Maps - More Info

This is not documentation on how to use the mapping features (please see "Documentation" for that); it is merely information about how they are implemented, for the curious.

Maps are used in Geiger Bot primarily to show two different types of data.

    1. Your Data: User Map Points

      1. Point Features - Map data collected on your device with an attached Geiger counter

    2. Preloaded Map Data

      1. Point Features - EPA Uranium Mines

      2. Raster Tilesets - Safecast, NNSA, NURE, AIST, and other data

User Map Points

    • Each data point is its own object (MKAnnotation)

    • Advantages:

      • Immediate: Can instantly use and view data that you log, without additional processing

      • Flexible: Can display using a simple shape or text label

      • Multiple attributes: Each data point is easily associated with multiple attributes on a per-point basis

    • Disadvantages:

      • Speed: Slower to display than raster datasets, for anything but very small datasets

        • If you do not need to view user points, it is recommended you turn the layer off for best performance.

      • No Math: Currently, cannot combine with raster layers using math (ie, background subtraction)

      • Less Development: The point features have simply not seen the development time the raster layers have. They are much better than they were at first, but are relatively basic in terms of functionality.

    • Data: Stored on device, written at same time as log file

      • Storage requirements: ~47 bytes/point

Preloaded Data - EPA Uranium Mines

    (Image: point features displayed with a label)

    • This is the only preloaded dataset like user points -- each point is its own object

    • Approximately 12,000 uranium mines and claims from the US EPA are included (US only)

      • Note, the original dataset does not distinguish mines, claims, and prospects.

    • Advantages:

      • Lower storage requirements: Storing this data as points uses less space than the equivalent rasters

      • Flexible: Can show mine symbols or a larger rectangle with the mine name

    • Disadvantages:

      • Speed: Like user point data, these are slower to display than a raster. Several levels of precached detail help with this.

        • Again, if you do not need to view them, it is recommended you turn this layer off.

    • Symbology:

      • The default symbology is a nod to how mines appeared in the game "Mass Effect" on the planet map.

      • The luminance of the mine symbol is proportional to the number of mines it represents

Preloaded Data - Raster Layers (many)

    • A variety of raster (image / gridded) datasets are included. Some of these are not available online. Most I have processed or created personally. Others, such as Geoscience Australia, required almost zero processing on my part.

    • These are split into tiles, much like normal web map image tiles. But instead of storing RGBA images, data values are stored.

      • To my knowledge, this is the only implementation using this technique. (although raster mosaic datasets themselves and raster pyramids have been around for a very long time)

      • Why isn't this more common? To my knowledge, JavaScript cannot intercept and reprocess image tiles like this, so the technique is not possible with a web map. Thus, in applications where interactive raster functions are required, they are done on the server.

    • Advantages:

      • Speed: Very fast performance. The data is aligned exactly with the "Web Mercator" tiles used on a map. Natural alignment of data structures is quite efficient.

      • Math: Can combine with other raster layers quickly and dynamically without a server.

      • Data Processing: The nature of rasters makes it easy to do a variety of realtime processing effects, such as resize, background subtract, shadows, etc.

    • Disadvantages:

      • No per-point attributes: Raster cells only encode a single value

      • Limited resolution: For example, the Safecast data's max resolution is currently a 20m cell. Where exactly the point(s) were in that cell is unknown.

      • Significant pre-processing time:

        • 1. Data points are aggregated into X, Y, Z values, reprojected from lat/lon into Web Mercator coordinates

        • 2. A raster of the maximum resolution of the data is created at a target resolution, each cell containing an average of the point values

        • 3. This "master" raster is tiled into individual 256x256 pixel tiles

        • 4. Each of these resulting rasters is then resampled and combined in quads at half resolution until all tile levels up to 1 have been created
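
    As a rough illustration of step 4, here is a minimal sketch in C of downsampling one child tile into a quadrant of its parent tile. The tile layout, the NODATA sentinel, and the function name are my assumptions, not the app's actual code:

        #define TILE_W 256
        #define NODATA (-9999.0f) /* assumed sentinel; the real value is not documented */

        /* Downsample one 256x256 child tile into a 128x128 quadrant of the parent.
           qx/qy (0 or 1) select the quadrant, matching the child's position in the quad. */
        static void downsample_quadrant(const float *child, float *parent, int qx, int qy)
        {
            for (int y = 0; y < TILE_W / 2; y++) {
                for (int x = 0; x < TILE_W / 2; x++) {
                    float sum = 0.0f;
                    int   n   = 0;
                    /* Average the 2x2 source block, ignoring NODATA cells. */
                    for (int dy = 0; dy < 2; dy++) {
                        for (int dx = 0; dx < 2; dx++) {
                            float v = child[(y * 2 + dy) * TILE_W + (x * 2 + dx)];
                            if (v != NODATA) { sum += v; n++; }
                        }
                    }
                    parent[(qy * 128 + y) * TILE_W + (qx * 128 + x)] =
                        (n > 0) ? (sum / (float)n) : NODATA;
                }
            }
        }

    Running this once per child tile, for each quad at each level, builds out the full pyramid described above.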

Rendering: Point Feature Data (user points / EPA uranium mines)

    • A database query is made for what is currently visible on the screen

    • Unfortunately, it is not always safe to directly create objects from the query results; the memory overhead is high enough that too many points can force a device reboot.

    • To prevent this from happening:

      • 1. Each query has a limit to the number of rows returned.

      • 2. The data is filtered/binned (aggregated)

        • This step saves memory by:

          • A. Not creating points that are completely hidden by other points

          • B. Binning overlapping or nearby points together, depending on user settings

        • The initial query results are loaded into vectors, not high-level objects, for efficiency

        • SIMD vectorized code performs data processing.

      • Binning can be free-form with a search radius, or gridded. (see the sketch at the end of this section)

        • This "realtime" binning and coordinate system conversion is computationally expensive, but is much faster than creating more annotation objects.

      • It is still slow, but not as painfully slow as it once was.

    • After this pre-processing, the point objects are created, and added to the map. (the slowest part)

    • When the map is panned, the extent that needs to be redrawn is calculated, and any previous points are re-used if possible

      • Any threads currently loading or processing point data are aborted when the map pans, to avoid thread concurrency issues.

    • The actual drawing is performed in the drawRect: method of the MKAnnotationView. This uses CPU-bound drawing functions for simple shapes or to render text.
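
    For illustration, here is a minimal sketch in C of one way the gridded binning described above can work. The flat-grid approach, the centroid output, and all names are my assumptions, not the app's actual code:

        #include <stdlib.h>

        typedef struct { float x, y, value; } BinnedPoint;

        /* Collapse n screen-space points into at most one averaged point per
           gridCell x gridCell block. out must hold cols * rows entries. */
        static size_t bin_points(const float *px, const float *py, const float *val,
                                 size_t n, int screenW, int screenH, int gridCell,
                                 BinnedPoint *out)
        {
            int    cols = (screenW + gridCell - 1) / gridCell;
            int    rows = (screenH + gridCell - 1) / gridCell;
            size_t nc   = (size_t)cols * rows;
            float *sum = calloc(nc, sizeof *sum), *sx = calloc(nc, sizeof *sx);
            float *sy  = calloc(nc, sizeof *sy);
            int   *cnt = calloc(nc, sizeof *cnt);
            for (size_t i = 0; i < n; i++) {
                int c = (int)px[i] / gridCell, r = (int)py[i] / gridCell;
                if (c < 0 || c >= cols || r < 0 || r >= rows) continue; /* offscreen */
                size_t k = (size_t)r * cols + c;
                sum[k] += val[i]; sx[k] += px[i]; sy[k] += py[i]; cnt[k]++;
            }
            size_t m = 0;
            for (size_t k = 0; k < nc; k++) {
                if (cnt[k] == 0) continue;
                out[m].x     = sx[k]  / cnt[k];  /* bin centroid */
                out[m].y     = sy[k]  / cnt[k];
                out[m].value = sum[k] / cnt[k];  /* averaged value, eg dose rate */
                m++;
            }
            free(sum); free(sx); free(sy); free(cnt);
            return m; /* far fewer annotation objects than n */
        }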

Rendering: Raster Data

    • Overall, the raster data renderer creates an RGBA image tile from a data tile.

    • A custom tile provider class acts as a local image tile server

    • It is a rendering "pipeline" of sorts, though it does not use the GPU.

    • A rough sequence of events is:

      • 1. Index Check: Indices are used to determine if loading a tile should be attempted (to save disk I/O)

      • 2. Load/Convert/Cache: The tile is loaded through a caching proxy and converted to 32-bit floating point in its original units

      • 3. Preprocessing FX: Any preprocessing effects enabled for the layer are applied

      • 4. Resampling: If lower resolution than what is being requested, it is cropped and resampled using bilinear interpolation

      • 5. Sublayers [1 ... n]: If any sublayers, they are loaded in the same way, and combined sequentially using the specified raster math operation

        • 5a. Postprocessing FX (1): During layer combines, any postprocessing effects are encoded into the alpha channel for individual pixels, as each raster math operation flattens all tiles into a single layer iteratively

      • 6. LOD Shifts: If LOD shifts are selected, the tile is resampled. A template is multiplied with the alpha channel to create the gridded visual effect.

      • 7. Color Indexing: The final data tile is then normalized for the selected min/max, and the values are replaced with indices into the LUT color vectors

      • 8. Color Channel Creation: R/G/B channel data is created.

        • Postprocessing FX (2): If any post-processing effects were enabled, they are applied now before the premultiplied alpha.

      • 9. Alpha Premultiplication: Alpha channel data is premultiplied and normalized. Premultiplied alpha is required on iOS.

      • 10. Final Rasterization: The tile is then converted and saturated to UInt8 RGBA values and interleaved, creating the actual image

      • 11. Rendering: The image is then rendered onto the map.
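
    To make steps 7-10 concrete, here is a hedged sketch in C of the color indexing and premultiplication; it omits the FX and LOD steps, and all names and layouts are illustrative:

        #include <stddef.h>
        #include <stdint.h>

        /* data: 32-bit float tile in original units; alpha: 0..1 per pixel;
           lutR/G/B: 65,536-entry color LUT; rgbaOut: interleaved 8-bit RGBA. */
        static void rasterize_tile(const float *data, const float *alpha,
                                   const uint8_t *lutR, const uint8_t *lutG,
                                   const uint8_t *lutB, float minV, float maxV,
                                   size_t n, uint8_t *rgbaOut)
        {
            const float scale = 65535.0f / (maxV - minV);
            for (size_t i = 0; i < n; i++) {
                /* 7. Color indexing: normalize to min/max, clamp to LUT range. */
                float u = (data[i] - minV) * scale;
                if (u < 0.0f) u = 0.0f; else if (u > 65535.0f) u = 65535.0f;
                uint32_t idx = (uint32_t)u;
                /* 8.-9. Channel creation and alpha premultiplication (required on iOS). */
                float a = alpha[i];
                rgbaOut[i * 4 + 0] = (uint8_t)(lutR[idx] * a + 0.5f);
                rgbaOut[i * 4 + 1] = (uint8_t)(lutG[idx] * a + 0.5f);
                rgbaOut[i * 4 + 2] = (uint8_t)(lutB[idx] * a + 0.5f);
                /* 10. Final rasterization: saturate alpha itself to UInt8. */
                rgbaOut[i * 4 + 3] = (uint8_t)(a * 255.0f + 0.5f);
            }
        }

    (The real renderer does this with SIMD vectorized lookups rather than a scalar loop; see the SIMD section below.)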

SIMD Vectorization

    • To improve performance, almost all map features use SIMD vectorization where possible (this also applies to all code in Geiger Bot)

    • This provides a ~ 5 - 10x improvement in performance over scalar code

      • Using the GPU provides a 5 - 10x improvement on top of that.

      • However, the iOS GPU cannot be used for GPGPU calculations and fragment shaders are limited to 8-bit datatypes only, so I don't see how to use the GPU at the moment.

    • The SIMD unit is part of the CPU, and can execute code simultaneously with the main CPU core

      • It executes an instruction on up to 4x 32-bit floating point values simultaneously

        • Hence the name: "Single Instruction, Multiple Data"

    • On ARM devices, the SIMD unit is named "NEON". It is present on all iOS devices.

    • x86/x64 CPUs also have SIMD units; two instruction sets for them you may be familiar with are MMX and SSE. The early MMX was rather terrible (integer only) but modern SIMD is quite good.

    • Compilers will sometimes auto-vectorize code, but typically not very well.

    • To exploit SIMD, you need to code for it; framework libraries like vDSP (iOS / ARM) and DirectXMath (Windows / x86/x64) make this easier.
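
    As a generic illustration (not code from the app), here is the same operation written as a scalar loop and then with vDSP, which dispatches to the SIMD unit:

        #include <Accelerate/Accelerate.h>

        /* Scale n float values by a constant, one value per instruction. */
        void scale_scalar(const float *in, float *out, float k, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                out[i] = in[i] * k;
        }

        /* Same result via vDSP; on ARM this runs on NEON, processing
           4x 32-bit floats per instruction. */
        void scale_simd(const float *in, float *out, float k, size_t n)
        {
            vDSP_vsmul(in, 1, &k, out, 1, n);
        }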

Raster Data: Data Tiles

  • Data tiles are flexibly defined.

    • They can be stored:

      • Remotely

      • In the local file system

      • As PNGs

      • As .raw.gz files

      • In a SQLite database or in the filesystem

        • All current raster data is stored in raw form in a local database. In testing there was no advantage to individual files.

        • There is one database file per layer.

        • The metadata is not stored with the layer, other than a "DataCount" column.

          • DataCount is optional, but knowing it can allow cheating on some calculations later.

          • The database schema is similar to MBTiles in some respects.

            sqlite> .schema Tiles

            CREATE TABLE Tiles(ID INTEGER PRIMARY KEY, TileX INT, TileY INT, TileLevel INT, DataCount INT, TileData BLOB);

            CREATE INDEX idx_Tile_XYZ ON Tiles(TileX,TileY,TileLevel);

      • Data Types

        • Many different binary data types are defined

          • Currently, all layers use UInt8, UInt16, or float (32-bit) data types.

        • The raw raster data is converted back into the original unit of the layer with a factor stored in the metadata.

          • For example, the Safecast data is stored in nSv/h as a 16-bit integer, and divided by 1000.0 to convert back to uSv/h.

        • All data types are seamlessly wrapped to a 32-bit float tile in memory for processing

        • The data type used for a dataset depends on its min/max, and the amount of resolution between the data values desired.

          • For most radiation dose rate data, this works out to be UInt16 (0 - 65535), allowing for 0.000 uSv/h - 65.535 uSv/h at a resolution of 0.001 uSv/h. I found that lower resolution degraded the visualization.

          • The NNSA cesium activity layers are the only 32-bit float layers. This was done in order to not lose all detail in the low ranges of activity in the surveys near Tokyo.
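
    For example, here is a minimal sketch of unwrapping a UInt16 tile to 32-bit floats with vDSP, using the Safecast nSv/h -> uSv/h factor described above (the function name is mine):

        #include <Accelerate/Accelerate.h>

        /* raw: UInt16 tile as stored; out: float tile in original units.
           divisor comes from the layer metadata, eg 1000.0f for Safecast. */
        void tile_to_float(const uint16_t *raw, float *out, float divisor, size_t n)
        {
            vDSP_vfltu16((const unsigned short *)raw, 1, out, 1, n); /* UInt16 -> float */
            vDSP_vsdiv(out, 1, &divisor, out, 1, n);                 /* nSv/h -> uSv/h  */
        }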

    • "Native" RAW file type

      • This is simply the data values in row order (little-endian byte order)

      • Compressed with zlib's DEFLATE using maximum compression

      • Sort of like a PNG, only without the header and Paeth algorithm
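
    A minimal sketch of writing such a tile with zlib follows. Whether the container is a true gzip stream or a raw zlib stream is not specified above, so this approximation uses zlib's compress2 (names are mine):

        #include <stdio.h>
        #include <stdlib.h>
        #include <stdint.h>
        #include <zlib.h>

        /* Write count UInt16 cells, in row order, DEFLATEd at maximum compression.
           Assumes a little-endian host (true of all iOS devices), so the buffer
           can be compressed as-is. */
        int write_raw_gz(const char *path, const uint16_t *cells, size_t count)
        {
            uLong  srcLen = (uLong)(count * sizeof(uint16_t));
            uLongf dstLen = compressBound(srcLen);
            Bytef *dst    = malloc(dstLen);
            if (!dst) return -1;
            if (compress2(dst, &dstLen, (const Bytef *)cells, srcLen,
                          Z_BEST_COMPRESSION) != Z_OK) { free(dst); return -1; }
            FILE *f = fopen(path, "wb");
            if (!f) { free(dst); return -1; }
            fwrite(dst, 1, dstLen, f);
            fclose(f);
            free(dst);
            return 0;
        }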

    • Standardization / Data Tile Creation

      • Unfortunately, there are no standards for packing data into a tile.

      • Pretty much everything assumes creation of static RGBA display tiles only.

      • You can create 8-bit PNGs if the data can be effectively encoded using an 8-bit integer, but this will typically waste the other color and alpha channels. Very few programs will create 16-bit PNGs.

      • Note that even by being clever and using multiple color channels, you can't get more than 8-bit precision from pretty much anything that makes image tiles, because they rasterize data values using an 8-bit indexed color palette.

Raster Data: Data Updates

    • Version 1.6.9 introduced in-app map data updates, for the first time allowing updates without a new app version.

    • Currently, this is for Safecast data only

  • Update process: (client)

      • 1. Update Prechecks

        • A confirmation is displayed to the user, due to the processing time requirement and download size for users with metered data plans

        • If the user approves the update, the update process purges any old temporary files that may exist, and checks for sufficient disk space

        • All current map layers are released, and the Apple Satellite basemap is selected to make the update process as fast as possible

      • 2. New Data Check

        • The timestamp of the last update is compared to the timestamp on the server

        • If the data on server is newer, the download is allowed to proceed

        • Additionally, the server filesize must pass a sanity check

          • This is to protect against having an update potentially result in no map data if something went wrong on the server.

      • 3. File download

        • The update file is downloaded to the client asynchronously.

        • This will continue for a short time even in the background.

        • Currently, resuming an interrupted download is not supported.

      • 4. File verification

        • The downloaded database file is checked via SQLite's PRAGMA integrity_check for errors. (see the sketch after this list)

        • Additionally, the schema and row count must pass a sanity test.

      • 5. Rasterization / Tiling process

        • (on an iPhone? madness!)

        • The initial update file, which contains XYZ point features, is rasterized to the base tile level (13)

        • After that, each tile level is used to generate the next, from levels 12 to 1, by downsampling a "quad" of tiles into a single tile

        • This takes about 8 minutes (iPad 3) to 18 minutes (iPod Touch 3G) and is mostly due to processing time spent in zlib compression

      • 6. Tiling verification

        • After tiling is complete, it too must pass a sanity check

      • 7. Database defragmentation

        • For extra rendering performance, the SQLite VACUUM command is finally issued.

      • 8. Metadata update

        • If the update was successful, the new extent of the data and server timestamp are saved locally

        • The date of the update (GMT) will be used as part of the Safecast layer name

        • The bitmap index caches (see below on this page) are then issued a force reload command for the updated layer
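
    As an illustration of the kind of verification done in step 4 above, here is a minimal sketch using the SQLite C API (the function name and exact policy are mine, not the app's):

        #include <string.h>
        #include <sqlite3.h>

        /* Returns 1 if PRAGMA integrity_check reports the single row "ok". */
        int db_passes_integrity_check(const char *path)
        {
            sqlite3 *db = NULL;
            sqlite3_stmt *stmt = NULL;
            int ok = 0;
            if (sqlite3_open_v2(path, &db, SQLITE_OPEN_READONLY, NULL) != SQLITE_OK) {
                sqlite3_close(db);
                return 0;
            }
            if (sqlite3_prepare_v2(db, "PRAGMA integrity_check;", -1, &stmt, NULL) == SQLITE_OK
                && sqlite3_step(stmt) == SQLITE_ROW) {
                const unsigned char *result = sqlite3_column_text(stmt, 0);
                ok = (result != NULL && strcmp((const char *)result, "ok") == 0);
            }
            sqlite3_finalize(stmt);
            sqlite3_close(db);
            return ok;
        }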

  • Update process: (server)

      • 1. Every 4 hours, the server invokes a SQL script that ultimately creates the output database

        • Note: This means you can potentially update, and get the same data. Currently, the process does not have very good change detection built into it.

      • 2. The server aggregates the full Safecast dataset into projected Web Mercator pixel x/y points at ~19m resolution (tile level 13) (see the projection sketch after this list)

      • 3. For each pixel, only the most recent 270 days of measurements at that point are used.

        • The intent of this is to provide an up-to-date view of a dataset that has changed over time.

      • 4. Sanity checks and manual filtering are employed to provide the "cleanest" and most accurate and objective view of the data as possible.
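
    A minimal sketch of the projection in step 2, using the standard Web Mercator formulas (textbook math, not the server's actual script):

        #include <math.h>

        /* WGS84 lat/lon degrees -> Web Mercator pixel x/y at tile level z
           (256px tiles). At level 13 the world is 256 * 2^13 = 2,097,152 px
           wide, or roughly 19m per pixel at the equator. */
        static void latlon_to_pixel(double lat, double lon, int z,
                                    double *px, double *py)
        {
            double worldPx = 256.0 * pow(2.0, (double)z);
            double latRad  = lat * M_PI / 180.0;
            *px = (lon + 180.0) / 360.0 * worldPx;
            *py = (1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / M_PI) / 2.0 * worldPx;
        }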

  • Benefits

      • Allows for on-demand, near-realtime updates of Safecast map data

      • Exceptionally fast performance for a GIS tiling solution that works on any iOS mobile device

        • Actually outperforms ArcGIS and GDAL by several orders of magnitude for this dataset (I cheat)

      • Robust process with numerous checks and failsafes that works on iOS 4 to iOS 7

      • Resulting data tiles are interactive and work offline

  • Caveats

      • Performance

        • While the new tiling code is very fast, it's still annoying to wait 10 minutes for the data processing to complete

        • Thus, porting the tiling process to the server is something in development, to eliminate on-device data processing entirely.

      • No Interpolation Updates

        • Only the Safecast data layer is currently updated.

        • I don't have a good solution for regular auto-generated updates of the interpolation layer at this time.

      • No Per-Point Attributes (ie date)

        • As this is only a single raster layer, only one value is supported-- dose rate.

        • This is again, something in development.

        • One raster tileset is required per attribute, and thus the tiling needs to be moved to the server, or this alone would double processing times.

Raster Data: Dynamic Data Tiles

    (Image: EPA Uranium Mines, rendered via the dynamic data tile interface)

  • Data tiles, without the tile

      • Data tiles are based on the standard Web Mercator tile system

        • Normally, they require creation of a tiled dataset

        • This can be a slow process

      • Dynamic data tiles do not require preprocessing

        • They create ad-hoc data tiles from arbitrary lat/lon points

        • This allows point data to be used with all data tile functionality and layering

        • Currently available for EPA Uranium Mines and User Points data

      • No tiles: good and bad

        • Data is tiled for a reason: performance

        • Without this, dynamic data tiles can only work well with small datasets

    • Comparison:

      • vs. MKAnnotationViews

        • (+) Significantly better performance

        • (+) Full integration with layering and data tile functionality

        • (-) Cannot show labels or attributes

        • (-) Cannot show custom markers/symbology

      • vs. Data Tiles (pre-tiled)

        • (+) Do not require preprocessing and creation of a tile dataset

        • (+) Near-realtime with data sources that can change, such as user logged points

        • (+) Slight performance advantage for very small datasets

        • (-) Significantly slower for larger numbers of points

        • (-) Significantly more disk space used than tiles for large data sets

  • Processing Chain:

      • 1. Receive request for tile X/Y/Z

      • 2. Tile X/Y/Z is converted to lat/lon coordinates representing the tile's extent (as sketched after this list)

      • 3. Database is queried for points within coordinate extent

      • 4. Results are projected to Web Mercator pixel X/Y coordinates, and then offset to tile origin

      • 5. Data values are accumulated by summation (Uranium Mines: mine count) or average (user points: dose rate).

      • 6. Tile returned and flows through data tile renderer as normal.
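
    A minimal sketch of steps 1-2, using the standard slippy-map tile formulas (function names are mine):

        #include <math.h>

        static double tilex_to_lon(int x, int z) { return x / pow(2.0, z) * 360.0 - 180.0; }

        static double tiley_to_lat(int y, int z)
        {
            double n = M_PI - 2.0 * M_PI * y / pow(2.0, z);
            return 180.0 / M_PI * atan(sinh(n));
        }

        /* Lat/lon extent of tile (x, y) at level z: west/east from x and x+1,
           north/south from y and y+1 (tile Y increases southward). The result
           feeds the database query in step 3. */
        static void tile_extent(int x, int y, int z,
                                double *north, double *south, double *west, double *east)
        {
            *west  = tilex_to_lon(x,     z);
            *east  = tilex_to_lon(x + 1, z);
            *north = tiley_to_lat(y,     z);
            *south = tiley_to_lat(y + 1, z);
        }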

Raster Data: Indexing

    (Image, above: actual Safecast data layer bitmap index from early 2013, showing 1. the layer extent (green dotted line, manually added to illustrate) and 2. the bitmap index itself (from bitmap index test code))

  • Why Indexing?

    • The main bottleneck with the map data layer is disk I/O throughput.

      • Yes, even though it's flash memory. I know.

    • Indexing can help mitigate that by reducing wasted I/O.

    • The I/O is not merely the data tiles themselves; the map itself does a lot of disk I/O, including writes to cache remote baselayer imagery. The VectorKit maps (Apple Road/Apple Hybrid) are especially I/O-intensive.

    • There is also a lot of data to load; an iPad with Retina Display is higher resolution than most desktop monitors.

    • I/O Bottlenecking: Platform Differences

      • If Geiger Bot was a desktop app...

        • The PC/Mac would simply cache the entire tileset into RAM automatically.

          • In fact, it does this on the iOS simulator. Quite speedy.

        • Given that the largest tile database is currently about 30MB, caching it is a trivial task for a machine with 8GB+ RAM, one taken for granted.

      • Mobile Platform Issues

        • iOS does have read caching, but mobile devices don't have the RAM to cache much data.

          • Remember, with an iPad 3 or 4 it's driving a display higher resolution than most desktop monitors, but with a fraction of the resources.

          • Caching the basemap tiles takes most of the RAM that is there. (iPad 3, 1GB RAM)

          • And when the in-memory cache runs out? Disk I/O.

        • But the data is not the only thing missing from a read cache; so is the indexing.

        • This means the cost to attempt to blindly load tiles that do not exist is relatively quite high -- whether they are in a SQLite database or directly in the file system individually.

          • Cost: 1-10ms / tile, peaking at 100ms/tile, to find out the tile wasn't even there (iPad 3)

        • Bottom line: for datasets with a very fragmented, porous spatial distribution, the cost of attempting to load data that doesn't exist can be greater than the cost of actually loading the data that does exist on screen.

  • Two strategies are employed to reduce this:

    • 1. Extent Metadata

      • With each layer the extent is always available as part of the metadata.

        • (An extent is the north/south/east/west coordinates describing the area something covers)

      • This works very well for some datasets but not others, such as the Safecast dataset which has a nearly global extent.

      • By checking the extent, it is possible to avoid attempting to load a tile entirely, as sketched below.
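
    A minimal sketch of such an extent check (names are mine; date-line wraparound is ignored for brevity):

        typedef struct { double north, south, west, east; } Extent;

        /* Skip the tile load entirely if the tile's bounding box does not
           intersect the layer's extent from the metadata. */
        static int extents_intersect(const Extent *a, const Extent *b)
        {
            return !(a->south > b->north || a->north < b->south ||
                     a->west  > b->east  || a->east  < b->west);
        }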

    • 2. Bitmap Indexing

        • A bitmap index is based upon the principle that what a map looks like when zoomed out is similar to what it looks like when zoomed in.

        • Why? The extent didn't work very well for the main dataset being displayed (Safecast).

        • To address this, I began looking at a relatively common binary spatial index, the quadtree -- which quickly reminded me of the raster tile pyramid itself. While this was probably less brilliant deduction and more me being lazy and not wanting to implement quadtrees, I would argue it actually ended up being more efficient.

      • A. The Data Is The Index

          • Bitmap indexing exploits the very nature of the Web Mercator tile system itself

            • The Web Mercator tile system's structure is already a precalculated binary tree of sorts.

            • It is basically cheating, and the computational equivalent of copying someone else's math homework.

        • Compared to the extent, a bitmap index provides much more information.

          • Useful for cases like the Safecast data layer, where most tiles being loaded in the extent are empty.

        • A bitmap index is (usually) a single 256x256 image tile.

          • Most current layers only require a single tile to provide 100% coverage of whether or not a tile exists.

          • An algorithm will automatically select the tile that provides 100% coverage if possible.

      • B. Putting The "Bit" in "Bitmap"

        • After creating the individual index described above as a test, I saw that it was good.

          • Unfortunately, it could not scale to be useful in that form -- it would require too much RAM.

            • Even as a single byte per pixel, each tile = 64KB. I expected to need at least 50, using 3+ MB of RAM.

          • But I also saw it could be improved further and evolved.

          • "Bitmap" is not used as an abstract term here

            • The data is literally stored as one bit per pixel.

        • The 256x256 tile is reduced to a 64x64 16-bit tile, using 8KB of RAM.

          • Each 16-bit "pixel" value is then a 4x4 "sub-pixel" group.

        • Bit shifting is used to read and set individual values within each 16-bit pixel value.

        • Translation methods make working with this seamless; it is referenced as if it were the original 256x256 tile.
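
    A minimal sketch of the bit-packed structure described above; only the 64x64 x 16-bit organization comes from the text, the rest (names, bit layout) is my assumption:

        #include <stdint.h>
        #include <stdbool.h>

        /* 64x64 grid of uint16_t, 8KB total; each 16-bit cell packs a 4x4
           block of 1-bit "tile exists" flags from the original 256x256 tile. */
        typedef struct { uint16_t cells[64 * 64]; } BitmapIndex;

        static inline int bit_for(int x, int y) { return (y & 3) * 4 + (x & 3); }

        static inline bool bmi_get(const BitmapIndex *idx, int x, int y)
        {
            uint16_t cell = idx->cells[(y >> 2) * 64 + (x >> 2)];
            return (cell >> bit_for(x, y)) & 1;
        }

        static inline void bmi_set(BitmapIndex *idx, int x, int y)
        {
            idx->cells[(y >> 2) * 64 + (x >> 2)] |= (uint16_t)(1 << bit_for(x, y));
        }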

      • C. When The World (Tile) Is Not Enough

        • Some datasets cannot be completely indexed by a single tile.

        • A tile can only be a 100% perfect index for a certain number of zoom levels. (spoiler: 8)

        • If the layer has tiles beyond that, there will be many "misses" as it tries to load them from disk.

          • As its maximum resolution is exceeded, its effectiveness degrades quickly.

        • Thus, for such layers, additional caches are created from the [maxZ] - 8 tiles.

          • For example, the Safecast data is rendered to a maximum tile level of 13.

          • So, the app loads every single Safecast data tile from tile level z=5, and stores them in bit form.

          • These tiles complete a perfect index of the dataset.

      • D. Performance Considerations

          • Too Bloated

            • Initial memory use for complete indices of all layers was > 1 MB, which I considered too high.

            • Some layers are mirrors of other layers in terms of everything except data value.

            • Thus, the layer metadata was extended to include an optional reference to an index cache proxy.

              • This is implemented in the cache manager as a simple lookup table created at runtime.

              • This saves a little memory for datasets like the NNSA Cs-134 and Cs-137 layers which only need 1 bitmap index tile anyway.

              • It saved a lot more memory for the Safecast historical dataset, which uses the index for the newer dataset. This is not perfect per se as there are differences, but it is a secondary dataset and mostly correct.

            • This also saved significant disk I/O in creating the indices.

          • Too Slow

            • Loading speeds for creating the indices were still much too slow.

            • Stuff that didn't work:

              • 1. App Startup

                • Problem: delayed startup of app by ~10 seconds.

                • This was terrible, and most people hate slow app startups. Including me.

              • 2. Lazy Loading (as needed, per tile)

                • Problem: this exacerbated existing I/O bottlenecks by trying to load index data when the map was already slamming I/O. And once the index was (finally) loaded, it might not even be used unless you went back to that specific area; it took so long that it wasn't ready for the initial use.

                • This was actually even more terrible, and was not worth using at all.

              • 3. Delayed Loads in Low-Priority Dispatch Queue (background thread)

                • Problem: "low priority" is processor, not disk queue priority. The app was sluggish and unresponsive after loading for a noticeable time for no apparent reason at all, and it certainly didn't feel any faster when you started using it.

                • Not quite as terrible as #2, but still bad.

            • Stephen King wrote something to the effect of, "As everyone knows, it is impossible to become truly invisible, but it is possible to make one's self become dim and barely noticed."

            • The final implementation was similar to that. Ultimately it proved impossible to perfectly hide that much I/O, or wait for perfect times of zero disk activity. Instead, a hybrid approach was used to minimize it and make it difficult to notice -- combining the strengths and minimizing the weaknesses of multiple techniques that didn't work particularly well on their own.

            • This I judged as acceptable; it was difficult to tell it happened, logs showed it completing quickly, and map performance was improved as expected.

      • E. Final Bitmap Indexing Performance

        • Checking a bitmap index: < 0.01ms

        • Loading them takes much longer (disk I/O), and they are not all available when the app initially launches.

        • Sequence of events:

          • 1. A manager class is instantiated as a singleton when the app starts

          • 2. Global caches are created for each layer in the background at a low priority

          • 3. Annoying layers needing multiple tiles for full coverage are then queried

          • 4. A master index of the individual bitmap indices is created and the manager allows requests to come in

          • 5. Non-global caches found in #3 are lazily loaded in bulk for entire layers upon layer selection/use

          • Overall:

            • Performance gains, per full screen render, randomly zoomed in on Japan with the default layer.

              • Minimum: ~350ms improvement on iPad 3

              • Maximum: ~3500ms improvement on iPad 3

            • Memory use: < 512KB when fully allocated

Raster Data: Querying

    • The exact value of a cell can be shown with a query tool

    • Symbology:

      • Target Designation: Green/black FN-style reticle

      • Primary Label: Raster cell value + unit

      • Secondary Label:

        • Normal: Latitude, longitude as WGS84 decimal degrees

        • Autotarget: Distance to target + unit (km or m, depending)

    • Activation:

      • Normal: While top menu is shown on map

      • Autotarget: While map is full screen and user tracking is enabled

    • Display Updates: Continuously computed with movement, up to 30Hz

    • Query method:

      • This uses its own instance of the tile provider class.

      • It loads the data in the same way as the image tile renderer does

      • It omits all alpha channel work and final rasterization to save speed

      • It instead decomposes the tile into X/Y/Z vectors and caches those. It is very fast once this has been cached.

      • It looks at only one tile -- what's directly under the reticle

    • A SIMD vectorized Pythagorean theorem is used to compute the distance to all pixels in the tile (see the sketch at the end of this section)

      • The pixel closest to the center of the reticle is returned and displayed. It will not return pixels outside the reticle.

        • It is very hard to exactly target a single pixel on a retina display, hence the assist

        • It used to kind of magnetically "snap" the center of the map to the nearest pixel, but that made panning annoying.

    • Autotarget Mode:

      • This is activated when the map is full screen and tracking you (ie, following your movement)

      • In this configuration, the reticle will "autotarget" and move to the closest point in the tile to your location, or the edge of the screen if that pixel is offscreen

      • It is displayed along with the distance in meters (or km), calculated after coordinate system conversion

      • This even works when the map is rotated, which was a giant math headache.
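
    As noted above in the query method, here is a hedged sketch of the nearest-pixel search using vDSP (the scratch-buffer handling and names are my assumptions):

        #include <Accelerate/Accelerate.h>

        /* xs/ys: cached pixel coordinate vectors for the tile under the reticle;
           cx/cy: reticle center. Returns the index of the closest pixel; the
           caller rejects it if it falls outside the reticle. */
        long nearest_pixel(const float *xs, const float *ys, size_t n,
                           float cx, float cy, float *scratchA, float *scratchB)
        {
            float negCx = -cx, negCy = -cy;
            vDSP_vsadd(xs, 1, &negCx, scratchA, 1, n);           /* dx = x - cx   */
            vDSP_vsq(scratchA, 1, scratchA, 1, n);               /* dx^2          */
            vDSP_vsadd(ys, 1, &negCy, scratchB, 1, n);           /* dy = y - cy   */
            vDSP_vsq(scratchB, 1, scratchB, 1, n);               /* dy^2          */
            vDSP_vadd(scratchA, 1, scratchB, 1, scratchA, 1, n); /* dx^2 + dy^2   */
            float minDist2;
            vDSP_Length minIdx;
            vDSP_minvi(scratchA, 1, &minDist2, &minIdx, n);      /* closest pixel */
            return (long)minIdx;
        }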

Raster and User Point Data: Coloring (LUTs)

    • Color lookup tables, or LUTs, are used to rasterize data values into RGBA values.

    • This is also known as "indexed color"

    • These are stored as 8-bit (256 color) LUTs on disk, and interpolated to 16-bit (65,536) colors

      • This is a relatively unique feature -- pretty much all other GIS software uses only 8-bit indexed color values

      • This provides smoother colors and allows additional detail to be seen in some cases

      • This uses slightly more memory, but takes no longer to execute (SIMD vectorized indexed table lookup)

      • The interpolation is linear in the RGB colorspace

        • I wrote code for CIE Lab and HSV colorspace conversion, but there was no difference in the results, other than HSV occasionally doing some odd colorspace wrapping (as you'd expect with HSV)

    • To better represent low-range values than linear scaling, the LUT indices are reshuffled using logarithmic functions, increasing low range contrast at the expense of high range contrast

      • This is similar to a gamma stretch, z' = z^(1/y), but faster as it is only done once, and logarithms provide a somewhat better fit to the data distribution

    • The unique combination of both 16-bit LUTs and logarithmic scaling allows for a full range display of radiological data without clipping and with well-maintained contrast at all ranges.

    • In other words, prettier colors on the map
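
    A minimal sketch of the 8-bit -> 16-bit LUT expansion via linear interpolation in RGB (the array layout is assumed):

        #include <stdint.h>

        /* src: 256 RGB triplets from disk; dst: 65,536 interpolated triplets. */
        static void expand_lut(const uint8_t src[256][3], uint8_t dst[65536][3])
        {
            for (int i = 0; i < 65536; i++) {
                /* Map the 16-bit index onto the 8-bit LUT's 255 segments. */
                float pos = (float)i * 255.0f / 65535.0f;
                int   i0  = (int)pos;
                int   i1  = (i0 < 255) ? i0 + 1 : 255;
                float t   = pos - (float)i0;
                for (int c = 0; c < 3; c++)
                    dst[i][c] = (uint8_t)(src[i0][c] * (1.0f - t)
                                        + src[i1][c] * t + 0.5f);
            }
        }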

  • Additional LUT options:

    • Min/Max:

      • Manually set via swiping on the LUT control or via a preselected range in the LUT picker

      • The min/max is autoset when the mode is changed

    • Discretize

      • This breaks the LUT into an averaged, limited color palette, trading data resolution for easier classification

      • Also, bilinear interpolation + discretize makes for neat geometric shapes when zooming in

    • Invert

      • Reverses the LUT order (does not invert RGB values)

      • Technically, "rainbow" LUTs are backwards to begin with; blue/violet are higher energy and shorter wavelength than red

    • Scale Mode

      • Linear

        • Direct representation of LUT. Not great for displaying most uSv/h data. Best choice for uranium ppm.

      • LN

        • Uses 4x ln (natural logarithm) functions to reshuffle LUT indices, increasing low-range contrast, but sacrificing high-range.

        • Similar to a gamma stretch, but with a different distribution more suited to uSv/h data.

        • Called "LOG" in previous Geiger Bot versions.

      • LOG10

        • Index reshuffle, performed 3x iteratively

        • A more extreme version of LN

        • Approximate stretch values in ArcMap: min: 0.3, max: 10.0, gamma stretch: 5.0.
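
    The exact ln/LOG10 reshuffle formulas are not spelled out here; purely as a generic illustration of the idea, a single-pass logarithmic index remap might look like this (k and everything else are illustrative, not the app's actual curves):

        #include <math.h>
        #include <stdint.h>

        /* Remap LUT indices with a log curve: low indices sweep through colors
           faster (more low-range contrast), high indices are compressed. */
        static void log_reshuffle(const uint8_t src[256][3], uint8_t dst[256][3], double k)
        {
            for (int i = 0; i < 256; i++) {
                double u = (double)i / 255.0;
                int    j = (int)(log1p(k * u) / log1p(k) * 255.0 + 0.5);
                dst[i][0] = src[j][0];
                dst[i][1] = src[j][1];
                dst[i][2] = src[j][2];
            }
        }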

  • Exceptions

    • "Safecast" LUTs are meant to emulate symbology, scaling and value ranges elsewhere.

      • Because of this, they are fixed, and the min/max, scaling, etc cannot be modified.

    • "Isotope" LUTs are special LUTs not used by the map, only gamma spectroscopy

      • They have predefined highlighted ranges for certain energy lines used to help with isotope ID

      • They are the "Blue-Red Blended" LUT with gray overwriting all non-key energies

    (Images: comparison of an 8-bit LUT (LN), a 16-bit LUT (LN), and a 16-bit LUT with the new LOG10 (3x) scaling)

Raster Data: Interpolation

  • When resizing raster data tiles, a combination of nearest neighbor and bilinear interpolation is used

  • Nearest neighbor is used as a "cheat" for only a +1 difference in tile levels.

    • This is slightly faster, but mainly minimizes tile boundary artifacts which occur with bilinear interpolation.

    • With a 2x resize, the difference is not really noticeable on a retina display

  • In all other cases, bilinear interpolation is used

    • This is a unique implementation of bilinear interpolation that fully respects NODATA values, using a modified 1st-pass and 3rd-pass algorithm. I am not aware of any other implementation that respects NODATAs. (see the sketch at the end of this page)

      • In the event NODATAs are encountered, it will fall back on linear interpolation (one direction only), or nearest neighbor

  • This is actually pretty optimized, and takes ~30ms on device for a 256x256 tile.

  • However, if the tile needs to be resized by more than 6 tile levels, it will only be a single pixel.

    • Thus, bilinear interpolation is performed twice in some cases to prevent this.

    • Any performance impact is offset by the high likelihood of the tile being cached, as one cache can be likely reused for many on-screen tiles that far zoomed in

    (Images: nearest neighbor interpolation vs. bilinear interpolation -- which is actually non-linear and quadratic, if you want to get all technical about it)
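
    Finally, as referenced above, here is a hedged sketch of NODATA-respecting bilinear sampling via weight renormalization, which naturally degenerates to linear or nearest neighbor as neighbors drop out. This is my approximation of the idea, not the app's actual modified 1st-pass / 3rd-pass algorithm:

        #define NODATA (-9999.0f) /* assumed sentinel */

        /* t: tile data, w pixels wide; (fx, fy): fractional sample position,
           assumed in-bounds so all four neighbors exist. */
        static float sample_bilinear_nodata(const float *t, int w, float fx, float fy)
        {
            int   x0 = (int)fx, y0 = (int)fy;
            float tx = fx - x0, ty = fy - y0;
            float v00 = t[ y0      * w + x0], v10 = t[ y0      * w + x0 + 1];
            float v01 = t[(y0 + 1) * w + x0], v11 = t[(y0 + 1) * w + x0 + 1];
            float w00 = (1 - tx) * (1 - ty), w10 = tx * (1 - ty);
            float w01 = (1 - tx) * ty,       w11 = tx * ty;
            /* Accumulate only valid neighbors, renormalizing the weights. */
            float sum = 0.0f, wsum = 0.0f;
            if (v00 != NODATA) { sum += v00 * w00; wsum += w00; }
            if (v10 != NODATA) { sum += v10 * w10; wsum += w10; }
            if (v01 != NODATA) { sum += v01 * w01; wsum += w01; }
            if (v11 != NODATA) { sum += v11 * w11; wsum += w11; }
            return (wsum > 0.0f) ? (sum / wsum) : NODATA;
        }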