Chelsea Scheske
The model presented below provides the user with tools to identify potential locations for settlements on a new planet, based on available resources and identified restrictions.
Figure 1: New Hope
The year is 2093, and we've finally done it. Done what, you ask? Destroyed the earth, of course! With our planet on its last legs, humans must find a new home, or perish.
Half a century ago, a gang of pessimistic scientists headed by Elon Musk sent out a fleet of robots to gather information on potential new homes. They've identified a planet with all the trappings -- it has an atmosphere with traces of oxygen and carbon dioxide that can be improved with terraforming. There are even waterways under the surface that can be accessed through mining. As a bonus, the scientists have also identified the first proof of alien life! The planet is dubbed New Hope. It all seems too good to be true!
Well, it is. Unfortunately, New Hope is populated with particularly aggressive, unpleasant creatures, known colloquially as Smorgs. Fortunately, these creatures, though aggressive, do not appear to venture far out of their habitat. Unhappily, the surface of the planet is also dotted with, for want of a better label, Stink Geysers. These are geographical formations that periodically spew forth a burst of noxious fumes. According to data, these are not deadly. The fumes are, however, decidedly unpleasant, and seem to interfere with electronic signals. The scientists have identified them as "best to avoid".
Despite these challenges, scientists believe that the planet can be safely colonized, if the settlement and travel routes are designed to avoid these potentially dangerous components. In addition, the technology is available to transport all those left on earth to the new planet, if a suitable location can be found.
Preparations are underway -- food and water pods, emergency evacuation pods, and health pods have been established remotely on the planet's surface in preparation for human arrival. However, despite the fact that they managed to design robots to travel at light-speed, the aforementioned scientists are unable to design an algorithm that will identify a safe space for a new settlement, and a shuttle, avoiding obstacles and allowing for easy access to the food, water, health, and emergency evacuation pods.
Luckily, ArcGIS was invented before the apocalypse, and clever students from Dr. Cardille's Advanced GIS class come to the rescue. They have developed a model that will help scientists identify the optimal location for a shuttle landing pad, and the settlement. The project presented below explores this process, and reveals the results of their painstaking work.
To ensure the best chance of survival on New Hope, a shuttle landing pad, a settlement location, and a route between them must be identified that avoid:
In addition, the settlement, landing pad, and travel route must be as close as possible to:
Planet New Hope, Milky Way Galaxy, Universe.
Project Objective: Identify two suitable locations for settlement and shuttle landing on New Hope, as well as a travel route between them that avoids the negative planet characteristics, and favors the supports and resources provided.
Questions: How can ArcGIS and ArcPy best be applied to solve this problem? Are there differences in outcomes if different tools are used?
Data collected by explorer robots was employed to develop the model for colonization of New Hope. The data inputs used to solve this problem are as follows:
Figure 2 shows the initial map of New Hope with the geographical data gathered from the explorer bots.
Figure 2: Initial Map of New Hope Surface, including locations of resources (food and water supply pods, emergency health pods, and emergency evacuation pods) and danger zones (stink geysers, alien habitats)
In order to determine optimal locations for a settlement and a shuttle pad, a model is developed using ArcGIS Model Builder, then the script is exported to Python for further analysis, data acquisition, and simplification. Figure 3 shows the final model developed to meet the project objectives. It shows the treatment of the original input data with several ArcGIS tools. Note that, before any data analysis was performed, the Project Tool was used to project the data into NAD 1983 Canada Atlas Lambert Conformal Conic. This was chosen as the original data was taken from Quebec, Canada, where this particular projection is said to have an accuracy of up to 1.0m (Map Tiler, 2019). The NAD 1983 Projection expresses linear units in meters, which becomes important during slope and other geographical algebraic calculations. The primary tools employed include:
Additional tools were used to treat the provided data to prepare it for use with the main tools, or for display. These include:
Figure 3: Final Model Builder Model for Settlement on New Hope
Mosaic to New Raster
Two DEM (Digital Elevation Model) files were provided, each representing one half of the extent. Specifically, 021l14_0200_deme.dem represented the Eastern half of the extent, and 021l14_0200_demw.dem represented the Western half. In order to use this data meaningfully, it had to be merged into a single raster file. The Mosaic to New Raster tool was used to this end.
Inputs: DEM East (Raster), DEM West (Raster)
Outputs: DEM Merged (Raster)
Parameters: Pixel Type -> "16_BIT_UNSIGNED"; Number of bands -> "1"; Mosaic Operator -> "LAST"; Mosaic Colormap Mode -> "FIRST"
DEMMerge = arcpy.MosaicToNewRaster_management([demE, demW], Scratch_gdb, "DEMMerge", projection, pixelType, "", numberOfBands, mosaicOperator, mosaicColormapMode)
The Slope Tool
The Spatial Analyst Slope tool was applied to the output of the Mosaic to New Raster operation, the merged DEM files. The importance of understanding the specifics of the projection applied to your input layers comes into play here. When a projection is applied to a data-set, a linear unit is applied with it. As noted, the NAD 1983 Canada Atlas Lambert Conformal Conic projection uses a linear unit of METERS; many other projections, however, use FEET. The z-factor can be leveraged to improve the accuracy of results when there is a discrepancy between the x,y units and the z units. According to ESRI (2019):
"The z-factor is a conversion factor that adjusts the units of measure for the vertical (or elevation) units when they are different from the horizontal coordinate (x,y) units of the input surface. It is the number of ground x,y units in one surface z-unit. If the vertical units are not corrected to the horizontal units, the results of surface tools will not be correct."
The default z-unit is METERS. If, for example, the x,y linear units of the inputs are in FEET while the elevation values remain in METERS, the z-factor must be set to 3.2808 (the number of feet in one meter); otherwise, the results will be skewed. In this case, since the NAD 1983 Lambert projection system has METERS as the linear unit, the default z-factor of 1 was used. The output is presented in Figure 7.
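The unit logic can be sketched in plain Python. This is a toy illustration of the z-factor arithmetic, not an ArcPy call; the helper name and unit strings are invented for this example.

```python
# Toy illustration of the z-factor logic (plain Python, not ArcPy).
# Per the ESRI definition, the z-factor is the number of ground (x,y)
# units in one surface z-unit.

def z_factor(xy_unit, z_unit):
    """Return the z-factor for common unit pairings."""
    feet_per_meter = 3.28084
    if xy_unit == z_unit:
        return 1.0                      # units already agree (our NAD 1983 case)
    if xy_unit == "feet" and z_unit == "meters":
        return feet_per_meter           # one meter of elevation spans 3.28084 ft
    if xy_unit == "meters" and z_unit == "feet":
        return 1.0 / feet_per_meter     # one foot of elevation spans 0.3048 m
    raise ValueError("unsupported unit pairing")

print(z_factor("meters", "meters"))  # 1.0 -> the default used in this project
```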
Input: DEMMerge (Raster)
Output: Slope (Raster)
Parameters: Output Measurement -> PERCENT_RISE; Method -> PLANAR; Z factor -> "1"; Z unit -> METER
slopeCalc = arcpy.gp.Slope_sa(DEMMerge, Slope_1, "PERCENT_RISE", "1", "PLANAR", "METER")
Figure 7: Slope Calculation Output Raster
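What the Slope tool computes per cell can be illustrated with a toy example in plain Python (not ArcPy), using the standard 3x3 finite-difference scheme that the ESRI documentation describes for the PLANAR method; the window values are invented for this sketch.

```python
import math

# Toy sketch of percent-rise slope for one 3x3 window of elevations,
# using the standard Horn-style weighted finite differences.
def percent_rise(window, cellsize):
    """window: 3x3 list of elevations [[a, b, c], [d, e, f], [g, h, i]]."""
    (a, b, c), (d, e, f), (g, h, i) = window
    dz_dx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8 * cellsize)
    dz_dy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8 * cellsize)
    return 100 * math.hypot(dz_dx, dz_dy)  # percent rise; 45 degrees = 100%

# A plane rising 1 m of elevation per 10 m of ground in x only -> 10% slope.
tilted_plane = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
print(percent_rise(tilted_plane, 10))  # 10.0
```

Note how the z-factor discussion above matters here: if the elevations were in feet while the cell size was in meters, the ratio dz/dx would be silently wrong by the conversion factor.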
The Euclidean Distance Tool
Euclidean Distance is a technique used to calculate a cell's relationship to a source or input. In this case, the tool is used to describe the relationship of the surrounding cells to the main waterbody, the alien habitat zones, food and water pods, and the other inputs described above. The Euclidean distance is determined by calculating the hypotenuse of the x_max and y_max distances from the cell to the nearest source, as determined by the projection (ESRI, 2019b). Using the Euclidean Distance Tool, with an input cell size of approximately 25.41 m (equal to the cell size of the original DEM files), Euclidean Distance Rasters were calculated for each of the inputs. The results are presented in Figures 8-13.
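The computation the tool performs can be sketched with a brute-force toy version in plain Python (not ArcPy; the grid size, source cell, and helper name are invented for this illustration):

```python
import math

# Toy sketch of what the Euclidean Distance tool computes: for every cell,
# the straight-line distance (in map units) to the nearest source cell.
def euclidean_distance_raster(rows, cols, sources, cellsize):
    """sources: set of (row, col) cells; returns a rows x cols grid."""
    return [
        [
            min(
                math.hypot((r - sr) * cellsize, (c - sc) * cellsize)
                for sr, sc in sources
            )
            for c in range(cols)
        ]
        for r in range(rows)
    ]

# One source in the corner, with the ~25.41 m cell size used in this project.
grid = euclidean_distance_raster(3, 3, {(0, 0)}, cellsize=25.41)
print(round(grid[0][1], 2))  # 25.41 -> one cell east of the source
```

The real tool uses a much faster propagation algorithm, but the per-cell result is the same hypotenuse calculation described above.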
Environmental Settings
The importance of establishing environmental settings
Note that the boundaries of the output raster overshoot the boundaries of the original input file. In subsequent steps, the Euclidean Distance Rasters will be "cut" to match the base map, so that only useful data is included in the analysis. The model was built this way intentionally; during the first trials of the model, the Processing Extent in the Environmental Settings of the Model Builder model was not set. This resulted in maps that did not cover the full extent, and results that were not representative of the context. Once the issue was identified, the Processing Extent was set to match that of the original Merged DEM files, which became the Standard Extent for the model. Snap Raster was also activated, using this same Merged DEM file. This adjustment resulted in a more robust and accurate analysis.
Input: Input Shapefiles (.shp)
Output: Euclidean Distance Raster, Direction Output Raster
Parameters: Output Cellsize -> 25.411126334889
euclideanDistance = arcpy.gp.EucDistance_sa(shapeFile, distanceOutputRaster, "", cellSize, directionOutputRaster)
The Extract by Mask Tool
The Extract by Mask tool is used to remove excess data from the Euclidean Distance Outputs, so that only data in the study extent is included in the analysis. In the current analysis, each Euclidean Distance Output Raster was treated with Extract by Mask tool, with the mask file set as the Standard Extent, based on the DEM Merged file. It is important to perform the extraction early in the analysis to limit computation time.
Extract by Mask vs. Clip: Is there a difference?
According to GIS Blog (2019), the Clip and Extract by Mask tools are essentially the same. They both take two rasters, or a raster and a polygon shapefile, as inputs, and produce a raster containing only the cells overlapped by the mask or clip raster.
Inputs: Euclidean Distance Output Raster; Feature or Raster Mask Data (DEM Merged file)
Output: Extracted Raster Mask
Code Sample:
maskRaster = arcpy.gp.ExtractByMask_sa(distFile, standardExtent, maskOutputRaster)
The Slice Tool
In order to identify the most suitable settlement sites, the comparison rasters must be converted to a comparable scale. To achieve this, the Slice Tool is used to categorize the data from each Euclidean Distance Raster into 10 classes.
Natural Jenks vs. Equal Interval
When applying the Slice Tool, the classification can be done by EQUAL_INTERVAL, EQUAL_AREA, or NATURAL_JENKS. According to the ArcGIS documentation (ESRI, 2019c), NATURAL_JENKS is not well suited for comparing maps built from different underlying information, while EQUAL_INTERVAL is the most appropriate classification type for the current situation.
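The EQUAL_INTERVAL logic itself is simple enough to sketch in plain Python (a toy stand-in for the Spatial Analyst call, with an invented value list): the value range is cut into n equally wide bins, and each cell receives its bin number.

```python
# Toy sketch of EQUAL_INTERVAL slicing: cut the value range into n
# equally wide bins and assign each cell its bin number (1..n).
def slice_equal_interval(values, n_zones=10):
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_zones
    zones = []
    for v in values:
        z = int((v - lo) / width) + 1 if width else 1
        zones.append(min(z, n_zones))  # the maximum value falls in the top zone
    return zones

print(slice_equal_interval([0, 5, 49, 50, 99, 100], n_zones=10))
# [1, 1, 5, 6, 10, 10]
```

Because every bin has the same width regardless of how the data is distributed, two rasters sliced this way share a directly comparable 1-10 scale, which is exactly what the subsequent Weighted Overlay requires.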
Slice vs. Reclassify
While the Slice Tool organizes existing data into the specified number of groups, the Reclassify Tool produces an output raster with different cell values than the original, based on the statistical transformation identified by the user. Reclassify is often used in Suitability Analysis projects (ESRI, 2019d). However, for the current project, Slice was chosen, due primarily to the ease of manipulating the tool in ArcPy. The script used to execute the Reclassify tool proved too complicated to manipulate in a way that simplified the Python script (at least given my level of experience with it). The Slice tool tends to be better applied to normally distributed data (ESRI, 2019d), which is not representative of the bulk of the data in the current study. However, a comparison of the results obtained when using Reclassify vs. Slice shows little difference, supporting the substitution of the Slice tool in this case. Figure 19 shows the results of the Reclassify tool and Figure 20 shows the results of the Slice tool, both using the 10 Equal Intervals option. A slight loss of differentiation between layers is apparent when using the Slice tool, but this difference was taken as insignificant for the purposes of the current study.
Inputs: Mask Output Rasters for Each Variable
Outputs: Slice Output Raster for Each Variable
Parameters: Number of Output Zones --> "10"; Method --> "EQUAL_INTERVAL"
sliceOutput = arcpy.gp.Slice_sa(maskFile, reclassRaster, "10", "EQUAL_INTERVAL", "1")
Figure 19: Food and Water Pods - Reclassify Tool: 10 Equal Intervals
Figure 20: Food and Water Pods - Slice: 10 Equal Intervals
The Weighted Overlay Tool
The Weighted Overlay Tool is a Spatial Analyst tool that is often applied to solve multi-criteria problems, and is particularly well suited to Suitability Analysis and Site Selection models (ESRI, 2019e). The tool takes input criteria and reclassifies them according to a common preference scale. In the current project, two weighted overlays were performed: one for the suitability analysis, and another for the least cost path calculation. In each, seven inputs were ranked on a scale of 1-10. The ranking was approached differently depending on the desired outcome (see below for an explanation). Each of the inputs was also weighted according to its importance, or influence, relative to the others. A total of 100% was divided among the seven inputs, as demonstrated in Figure 21. Avoiding the Alien Habitats was afforded the most weight (29%), followed by nearness to Food and Water Pods (20%), nearness to the main Water Body (15%), nearness to Health Pods (12%), nearness to Emergency Evacuation Pods (10%), and distance from Stink Geysers (9%). Given the low gravity on New Hope, and the advanced construction technology available, slope was considered the least important variable (5%).
Figure 21: Weighted Overlay Table for Suitability Analysis
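The per-cell arithmetic behind the tool can be sketched in plain Python (not ArcPy). The influence percentages below mirror the scheme described above; the class scores for the example cell are invented for illustration.

```python
# Toy sketch of the Weighted Overlay arithmetic for a single cell: each
# input's class score (1-10) is multiplied by its percent influence and the
# results are summed; a Restricted score on any input knocks the cell out.
RESTRICTED = None

def weighted_overlay_cell(scores_and_weights):
    """scores_and_weights: list of (score, percent_influence) pairs."""
    if any(score is RESTRICTED for score, _ in scores_and_weights):
        return RESTRICTED  # ArcGIS writes -1 for restricted cells
    total = sum(score * weight for score, weight in scores_and_weights)
    return round(total / 100)

cell = weighted_overlay_cell([
    (9, 29),   # far from Alien Habitats
    (8, 20),   # near Food and Water Pods
    (7, 15),   # near the Main Water Body
    (8, 12),   # near Health Pods
    (6, 10),   # near Emergency Evacuation Pods
    (9, 9),    # far from Stink Geysers
    (10, 5),   # low slope
])
print(cell)  # 8 -> this hypothetical cell would survive the "VALUE = 8" Con step
```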
3.5.1 Weighted Overlay for Suitability Analysis
The Weighted Overlay for the Suitability Analysis was performed on the seven reclassified raster inputs on a scale of 1-10, with 1 being least preferable and 10 being most preferable. Restricted values are, in reality, assigned values of -1 on the Weighted Overlay scale. Restricted values were assigned to the two steepest Slope classes (Classes 9 & 10), to the two classes closest to the Alien Habitats (Classes 1 & 2), to Class 1 of the Stink Geyser raster (the class nearest to the Stink Geysers), and to the first class of the Main Water Body. The result of the weighted overlay is a Suitable Areas Raster, which will be discussed in the 4.0 Results section.
3.5.1.1 Justification for the Weighting Scheme
In the case of the Alien Habitats, the two closest classes were restricted to reflect the risk inherent in settling or creating a travel route too near the homes of the creatures. The Stink Geysers, while not technically lethal, do erupt with significant force from time to time, so settling near them is not impossible, but settling directly on top of them is inadvisable. Likewise, settling directly on top of the Main Waterway is impossible, so Class 1 was restricted for this input.
Inputs: Reclassified Rasters of study variables
Outputs: Suitability Raster (contains areas that meet the suitability criteria based on the Weighted Overlay Analysis)
Parameters: (see figures 22-28)
Code Sample:
arcpy.gp.WeightedOverlay_sa("('C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\ReclassMainWater' 15 'Value' (1 Restricted; 2 10; 3 9; 4 8; 5 7; 6 6; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_smor1' 29 'Value' (1 Restricted; 2 Restricted; 3 1; 4 2; 5 3; 6 4; 7 7; 8 8; 9 9; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_Slop1' 5 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 Restricted; 10 Restricted;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassFoodWater' 20 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassHealthPod' 12 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassEmergEvac' 10 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassStinkGeiser' 9 'Value' (1 Restricted; 2 1; 3 2; 4 3; 5 4; 6 5; 7 7; 8 8; 9 9; 10 10;NODATA NODATA));1 10 1", SuitableAreas_3)
3.5.1.2 The Con Tool
The Suitable Areas Raster is further processed using the Con Tool, which performs a conditional if/else evaluation on each cell of an input raster (ESRI, 2019f). In our case, the Con Tool was supplied with the condition "VALUE = 8"; in other words, the tool produces an output raster consisting only of the cells of the input raster with a value of 8. These are, essentially, the "optimal" areas of our analysis. The result was split into two main areas, which were separated into two distinct raster layers using the Export Data > current extent option in the layer properties. One was selected as the Suitable Settlement Raster, or the destination input, and the other as the Suitable Shuttle Pad Raster, or the source input, for the following Cost Path Analysis.
Inputs: Suitable Areas, input twice, once as the Raster Input, and once as the True Raster
Outputs: Optimal Areas Raster
Parameters: "VALUE = 8"
Code Sample
optimalAreas = arcpy.gp.Con_sa(SuitableAreas, SuitableAreas, suitableOnly, "", "VALUE = 8")
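The conditional evaluation can be sketched in plain Python (a toy stand-in for the Spatial Analyst call; the grid values are invented, with None playing the role of NoData):

```python
# Toy sketch of the Con evaluation used here: keep a cell's value where the
# condition (value == 8) holds, and write NoData (None) everywhere else.
def con_equal(raster, value):
    return [[v if v == value else None for v in row] for row in raster]

suitable = [[7, 8, 8],
            [6, 8, 5],
            [8, 4, 8]]
print(con_equal(suitable, 8))
# [[None, 8, 8], [None, 8, None], [8, None, 8]]
```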
3.5.2 Weighted Overlay for Least Cost Path Calculation
The final step of the analysis process is to calculate a Least Cost Path between the areas found to be suitable according to the Weighted Overlay Analysis described in 3.5.1. A Weighted Overlay Analysis is performed again, though the values are scaled with a different result in mind. During a cost analysis, the variable classes that are most desirable are given a value of 1, making them the least costly, while the least desirable variables are given higher numbers, in this case 10. Though the scaling for each of the variables will not be shown here, Figures 29 and 30 show the differences in scaling techniques used to produce the two different Weighted Overlay outcomes.
3.5.2.1 The use of RESTRICTED in Weighted Overlay Analysis for Cost Path Calculation
It is important to note that the RESTRICTED scale does not perform in a Cost Path Analysis the way it does in a Suitability Analysis. RESTRICTED values are assessed by the tool as -1, which is lower than any cost value, making restricted cells the most desirable from a least cost path perspective. This can (and did!) cause results counter to expectations. Instead of using RESTRICTED, for the purposes of the present analysis, the highest cost value, in this case 10, was used.
Code Sample:
arcpy.gp.WeightedOverlay_sa("('C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\ReclassMainWater' 15 'Value' (1 10; 2 1; 3 2; 4 3; 5 4; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_smor1' 25 'Value' (1 10; 2 10; 3 9; 4 8; 5 7; 6 6; 7 5; 8 4; 9 3; 10 2;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_Slop1' 5 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassFoodWater' 24 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassHealthPod' 12 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassEmergEvac' 10 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassStinkGeiser' 9 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA));1 10 1", costSurface)
Inputs: Reclassified rasters of study variables
Outputs: Cost Surface raster
Parameters: varied, see Figure 30 for example
Figure 29: Weighted Overlay Table Inputs - Alien Habitats - Suitability Analysis
Figure 30: Weighted Overlay Table Inputs - Alien Habitats - Cost Path Analysis
3.6.1 The Cost Distance Tool
The Cost Path tool is usually run with the Cost Distance tool. The Cost Distance tool takes a source file (raster, or other feature class) and calculates the least accumulative cost distance from each cell to the nearest source over a cost surface (ESRI, 2019g). The tool produces two output rasters: the Cost Distance raster and the Cost Backlink raster, which are passed to the Cost Path Tool as inputs.
Inputs: Cost Surface Raster (from Weighted Overlay Table configured to Least Cost Path Method)
Outputs: Cost Distance raster, Cost Backlink raster
Parameters: (none used)
Code Sample:
costDistanceRaster = arcpy.gp.CostDistance_sa(optimalSettlement, costSurface, OutputCostDistance, "", cost_bklnk)
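The accumulative-cost logic can be sketched with a Dijkstra-style spread over a toy cost surface in plain Python (not ArcPy; the grid is invented, and the link costs follow the documented scheme of averaging the two cell costs and scaling diagonal moves by the square root of 2):

```python
import heapq

# Toy sketch of the accumulative-cost spread behind Cost Distance.
def cost_distance(cost_surface, sources):
    rows, cols = len(cost_surface), len(cost_surface[0])
    dist = {s: 0.0 for s in sources}
    heap = [(0.0, s) for s in sources]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    link = 2 ** 0.5 if dr and dc else 1.0
                    step = link * (cost_surface[r][c] + cost_surface[nr][nc]) / 2
                    nd = d + step
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# A cheap surface with one expensive cell in the middle: the spread
# detours around it rather than through it.
surface = [[1, 1, 1],
           [1, 9, 1],
           [1, 1, 1]]
d = cost_distance(surface, [(0, 0)])
print(round(d[(2, 2)], 3))  # 3.414 -> goes around, not through, the 9
```

This also shows why the RESTRICTED pitfall described below matters: a cell costed at -1 would be cheaper than any legitimate cell, so the spread would be drawn straight through it.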
3.6.2 The Cost Path Tool
The Cost Path Tool calculates the least cost path from one input (source) to another (destination). It works best with raster inputs. In this case, BEST_SINGLE was chosen as the calculation type. Using this method, for all cells on the input destination data, the least-cost path is derived from the cell with the minimum of the least-cost paths to source cells.
Inputs: Cost Distance raster, Cost Backlink raster, Destination raster (Optimal Shuttle Base Location)
Outputs: Cost Path raster
Parameters: "Method = BEST_SINGLE"; "Destination Field = VALUE"
Code Sample:
costPath = arcpy.gp.CostPath_sa(optimalShuttleBase, OutputCostDistance, cost_bklnk, costPath, "BEST_SINGLE", "Value")
3.6.3 Raster to Polyline Tool
The final step in the methodology is a conversion of the Cost Path raster to a polyline. This is, for the most part, an aesthetic choice, as the polyline can be more easily manipulated for display purposes. The Raster to Polyline Tool is used to this end. It simply takes the Cost Path raster as an input, and returns a Cost Path polyline as an output. The user can configure the output slightly, by specifying the destination field, and the Background Value. If the Background Value is set to ZERO, all values greater than ZERO will be considered part of the polyline. If the Background Value is set to NODATA, any value with data in the input raster will be considered part of the polyline (ESRI, 2019h). ZERO was selected for the present purposes.
Inputs: Cost Path raster
Outputs: Cost Path polyline
Parameters: "Destination Field = VALUE"; "Background Value = ZERO", Simplify Polyline
Code Sample:
costPathRaster = arcpy.RasterToPolyline_conversion(costPath, costPathRaster, "ZERO", "0", "SIMPLIFY", "VALUE")
Success! After applying the techniques discussed above, the suitability map shown in Figure 31 was produced. The analysis identified two main areas optimal for a settlement or a shuttle base. The area in the NW of the map was chosen as the Settlement Area, and the area near the SE corner was identified as optimal for a space shuttle base. The next step is to determine the most efficient path between the two main areas, avoiding restricted areas while passing by as many resource pods as possible to simplify the resupply process.
Figure 31: Results of the Suitability Analysis for a New Hope Shuttle Base and Settlement
Figures 32 and 33 show the results of the Least Cost Path analysis. The path appears to pass over a short span of water, and seems, for the most part, to avoid restricted areas and pass near resource points. These results will be discussed further below.
Figure 32: Results of Least Cost Path Analysis for New Hope, showing the optimal path between the two optimal areas.
Figure 33: Close up of Least Cost Path Raster, including resource points (GREEN) and restricted areas (RED).
The results of the analysis indicate that there are two potential areas for settlement on New Hope. The analysis appears relatively robust, as the settlements are located in areas that, upon visual inspection (see Figure 33), meet the criteria of low slope, distance from restricted areas (RED DOTS), and nearness to several resources (GREEN DOTS). The techniques used here are common to suitability analyses performed with ArcGIS. However, the Cost Path raster exhibits some strange behavior, particularly where it crosses the Main Water Body. A cluster of Alien Habitats on the north side of the water body leads one to expect that the path would avoid this area.
5.1 Potential reasons for an unexpected Cost Path
It is possible that a misapplication of influence in the Weighted Overlay Analysis, or an error in scoring the variables during the same process, led to this result. Conversely, it may be that the results reflect the context properly, and that there are interacting effects among the variables that are not simple to interpret visually. Further tweaking of the Weighted Overlay Tool inputs may be required to settle this seeming discrepancy. It is conceivable that allocating a greater weight to the Alien Habitats would change the behaviour so that the Cost Path performs in a more predictable manner.
An unfamiliarity with several parameters used in the tools may have contributed to errors in the results. In particular, as mentioned above, the Cost Path does not appear to behave as expected. There are several points during the analysis where errors in setting tool parameters may have compounded to produce unexpected results. These have been noted above, in the methodology section, but will be briefly summarized here.
6.1 Slice Tool
The Slice Tool was chosen over Reclassify, for reasons discussed above. However, Reclassify is the more common tool used in suitability analyses (ESRI, 2019d). Though the choice of Slice did not seem to perceptibly affect the results of the reclassification step, it is possible that small differences between these two methods compounded to produce issues in the results.
6.2 Natural Jenks vs. EQUAL_INTERVAL
The EQUAL_INTERVAL classification method was chosen, based on an interpretation of the ESRI documentation (2019c). However, the Natural Jenks classification method may have been more suitable.
6.3 Weighted Overlay
The most likely source of errors in analysis is the Weighted Overlay Tool. The scaling was, frankly, confusing, and while great effort was made to perform the Suitability and Cost Path Weighted Overlay analyses correctly, there may have been an issue in logic interpretation, leading directly to the strange results shown in the Cost Path.
6.4 Python Code
By far the most time consuming aspect of the project was working with the Python Code. Many seemingly logical operations were attempted and eventually discarded after hours of failure. Loop logic, in particular, caused significant frustration. In the future, a focus on simplicity and functionality will be the goal. In addition, there are likely ways to simplify and condense the existing code, that I was not able to identify in a reasonable amount of time.
After hours of painstaking work, the GIS team has identified two areas suitable for construction on New Hope! Everyone is very pleased! They have asked for some time before confirming the route between the two locations, as their model has produced a path leading directly through a warren of the aggressive aliens, and they suspect that a calculation error has occurred. A particularly paranoid member of the team has raised the suspicion that ArcGIS has become sentient and has purposely developed this path to put an end to the last of the humans, but the other team members have gently asked her if she has maybe been staring at the Python code for too long and just needs a break -- probably ArcGIS isn't evil... Probably. Luckily, the GIS team was taught well, and they have identified this potential error in time to rectify it. The human race is saved!!
The following Python script was developed from script exported from a Model Builder model. The script allows the user to identify potential locations for settlements on a new planet, based on available resources and identified restrictions. The tools "Euclidean Distance", "Mosaic to New Raster", "Slope", "Extract by Mask", "Slice", "Weighted Overlay", and "Con" are employed to this end. A Least Cost Path between the two best areas is identified, using the "Weighted Overlay", "Cost Distance", "Cost Path", and "Raster to Polyline" tools. Output from the Slice and Euclidean Distance operations will be stored in a ProjectData.txt file.
# ------------------------------------------------------------------------------------------------------------------------------------------
# Import arcpy modules =====================================================================================================
import arcpy, os
# Check out any necessary licenses =========================================================================================
arcpy.CheckOutExtension("spatial")
# Overwrite pre-existing files =============================================================================================
arcpy.env.overwriteOutput = True
# Set Geoprocessing environments============================================================================================
arcpy.env.scratchWorkspace = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb"
arcpy.env.snapRaster = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\DEM\\standardext"
arcpy.env.extent = "1761299.5278283 79667.1722302657 1807369.89987345 119842.162965725"
arcpy.env.workspace = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Project.gdb"
# Local variables ==========================================================================================================
# Extent Definition
# The Standard Extent raster was used to set the environment for the Model Builder, to ensure that all outputs covered the same
# extent. The Standard Extent raster was developed from the merged DEM rasters.
standardExtent = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\DEM\\standardext"
# Projection
# The original files are projected into the NAD_1983_Canada_Atlas_Lambert Projection to support robust analysis results. The projection
# must be applied at various points throughout the analysis to ensure that certain tool outputs maintain this projection.
projection = "PROJCS['NAD_1983_Canada_Atlas_Lambert',GEOGCS['GCS_North_American_1983',DATUM['D_North_American_1983',SPHEROID['GRS_1980',6378137.0,298.257222101]],PRIMEM['Greenwich',0.0],UNIT['Degree',0.0174532925199433]],PROJECTION['Lambert_Conformal_Conic'],PARAMETER['False_Easting',0.0],PARAMETER['False_Northing',0.0],PARAMETER['Central_Meridian',-95.0],PARAMETER['Standard_Parallel_1',49.0],PARAMETER['Standard_Parallel_2',77.0],PARAMETER['Latitude_Of_Origin',49.0],UNIT['Meter',1.0]]"
# Original NAD_1983_Canada_Atlas_Lambert Shape Files
mainWaterBody = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\shapefiles\\MainWaterNAD.shp"
smorgSightings = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\shapefiles\\smorgSightingNAD.shp"
stinkGeisers = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\shapefiles\\stinkGeiserNAD.shp"
foodAndWaterPods = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\shapefiles\\FoodWaterSupplyNAD.shp"
emergencyHealthPods = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\shapefiles\\emergHealthNAD.shp"
emergencyEvacuationPods = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\shapefiles\\EmergEvacNAD.shp"
# Output Direction Rasters
directionRasterMainWaterBody = "" # for Main Water Body
directionRasterSmorgSightings = "" # for Smorg Sightings
directionRasterStinkGeisers = "" # for Stink Geyser Locations
directionRasterFoodAndWaterPods = "" # for Food and Water Supply Pods
directionRasterEmergencyHealthPods = "" # for Emergency Health Pods
directionRasterEmergencyEvacuationPods = "" # for Emergency Evacuation Pods
# Euclidean Distance Output Rasters
mainWaterDistOut = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\mainWaterDist_1"
smorgSightingsDistOut = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\smorgDist_1"
stinkGeiserDistOut = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\stinkGeiserDist"
foodWaterDistOut = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\foodWaterDist_1"
emergHealthDistOut = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\emergHealthDist"
emergEvacDistOut = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\emergeEvacDist"
# Reclassification Rasters
mainWaterReclass = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\ReclassMainWater"
smorgReclass = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassSmorg"
slopeReclass = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_Slop1"
foodWaterReclass = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassFoodWater"
healthPodReclass = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassHealthPod"
emergEvacReclass = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassEmergEvac"
stinkGeiserReclass = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassStinkGeiser"
# Slope Calculation Rasters
demE = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\DEM\\demenad"
demW = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\data\\DEM\\demwnad"
Scratch_gdb = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb"
Slope_1 = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Slope_1"
# Mask Output Rasters
mainWaterMask = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\MainWaterMask"
smorgMask = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\smorgMask"
foodWaterMask = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\foodWaterMask"
emergHealthMask = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\emergHealthMask"
emergEvacMask = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\emergEvacMask"
stinkGeiserMask = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\stinkGeiser_1"
# Suitability Rasters
SuitableAreas_3 = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\SuitableAreas_3"
SuitableAreas_3__2_ = SuitableAreas_3 # same dataset; the Con tool below takes it as both its condition raster and its "true" input, so both names are kept
suitableOnly = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\suitableOnly"
OptimalAreas = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\OptimalAreas"
optimalShuttleBase = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\OptimalShuttleBase"
optimalSettlement = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\optimalSettlement2"
# Least Cost Path Rasters
costSurface = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\costSurface"
OutputCostDistance = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\costDistChosen"
cost_bklnk = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\cost_bklnk"
costPath = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Project.gdb\\costPath"
costPathRaster = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\costPathRaster"
# Create a file to store the Project Data
file = open("projectData.txt", "w")
# Read from the Python Project folder, create a list containing all of the rasters in the folder, print the filenames to the screen, then write them to the projectData.txt file
# The Walk function traverses the folder tree and yields only raster datasets
rasterList = []
for dirpath, dirnames, filenames in arcpy.da.Walk("C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject", topdown=True, datatype="RasterDataset"):
    for filename in filenames:
        rasterList.append(os.path.join(dirpath, filename))
        print(filename)
        file.write(os.path.join(dirpath, filename) + "\n")
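# A note for readers without an ArcGIS licence: the listing logic above can be
# rehearsed with the standard library alone, using os.walk in place of
# arcpy.da.Walk and a simple extension filter in place of the
# datatype="RasterDataset" parameter. The ".tif" filter and the throwaway
# folder layout below are illustrative assumptions, not part of the project data.
import os
import tempfile

def list_files(root, extensions):
    """Return the full paths of files under root whose extension is in extensions."""
    found = []
    for walk_dirpath, walk_dirnames, walk_filenames in os.walk(root, topdown=True):
        for walk_filename in walk_filenames:
            if os.path.splitext(walk_filename)[1].lower() in extensions:
                found.append(os.path.join(walk_dirpath, walk_filename))
    return found

# Demonstrate on a temporary directory tree.
demoRoot = tempfile.mkdtemp()
os.makedirs(os.path.join(demoRoot, "DEM"))
for demoName in ("demenad.tif", "demwnad.tif", "readme.txt"):
    open(os.path.join(demoRoot, "DEM", demoName), "w").close()
demoRasters = list_files(demoRoot, {".tif"})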
# Process: Euclidean Distance ================================================================================
# Create Lists for Euclidean Distance Variables
# Shapefile and Raster Lists
shapeFileList = [mainWaterBody, smorgSightings, stinkGeisers, foodAndWaterPods, emergencyHealthPods, emergencyEvacuationPods]
directionRasterList = [directionRasterMainWaterBody, directionRasterSmorgSightings, directionRasterStinkGeisers, directionRasterFoodAndWaterPods, directionRasterEmergencyHealthPods, directionRasterEmergencyEvacuationPods]
distanceOutputRasterList = [mainWaterDistOut, smorgSightingsDistOut, stinkGeiserDistOut, foodWaterDistOut, emergHealthDistOut, emergEvacDistOut]
cellSize = "25.411126334889" # this value corresponds to the cell size of the original DEM files.
# Euclidean Distance Loop
# Euclidean distance gives each cell's straight-line distance to the nearest source feature
count = 1
for shapeFile in shapeFileList:
    distanceOutputRaster = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\distOut_" + str(count)
    directionOutputRaster = ""
    arcpy.gp.EucDistance_sa(shapeFile, distanceOutputRaster, "", cellSize, directionOutputRaster)
    count = count + 1
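# What EucDistance computes, in miniature: for every cell, the straight-line
# distance (scaled by the cell size) to the nearest source cell. This brute-force
# sketch is only meant to make the tool's output interpretable; the real tool
# runs on full rasters and offers a maximum-distance cutoff and direction output.
import math

def euclidean_distance_grid(rows, cols, sources, cell_size=1.0):
    """sources: set of (row, col) cells; returns a rows x cols grid of distances."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Distance to the closest source, in cell units, scaled to map units.
            d = min(math.hypot(r - sr, c - sc) for sr, sc in sources)
            row.append(d * cell_size)
        grid.append(row)
    return grid

demoDist = euclidean_distance_grid(3, 3, {(0, 0)})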
# Process: Slope Calculations =======================================================================================================================================
# Before calculating slope, the DEM files (digital elevation model files), which are given in two separate extents (east and west), must be merged to create
# a single extent covering the full area under study. The Mosaic To New Raster tool is used to perform this task. This tool takes two or more DEM or Raster files as inputs
# and returns a single raster output. The inputs must have the same number of bands and bit depth. It is important to specify the number of bands (in this case, 1) and the pixel type
# of the output (here, 16-bit). The projection of the resulting Raster must be specified during the process. In this case, NAD 1983 Lambert_Conformal_Conic was chosen to match the existing data.
# The extent of the resulting raster must be specified after running this tool, as the output does not automatically conform to the environmental settings established for the geodatabase.
# Process: DEM Mosaic to Raster
# This process merges the two DEM files, providing a single output raster
demEW = demE + ";" + demW # the Mosaic To New Raster tool accepts its inputs as a single semicolon-delimited string
pixelType = "16_BIT_UNSIGNED"
numberOfBands = "1"
mosaicOperator = "LAST"
mosaicColormapMode = "FIRST"
DEMMerge = arcpy.MosaicToNewRaster_management(demEW, Scratch_gdb, "DEMMerge", projection, pixelType, "", numberOfBands, mosaicOperator, mosaicColormapMode)
# Process: Slope Calculation
# The slope is calculated using the DEMMerge output file, which has been projected into the NAD_1983_Canada_Atlas_Lambert system. This is important, as the NAD projection
# has a Linear Unit in meters. This affects how the user treats the 'z' factor. Since the linear unit is already in meters, one can allow the z factor to be 1. If the linear
# unit of the projection system was in feet, one would have to set the z factor to 0.3048, to convert the meter units to feet, so as to not distort the slope output.
slopeCalc = arcpy.gp.Slope_sa(DEMMerge, Slope_1, "DEGREE", "1", "PLANAR", "METER")
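# A minimal illustration of the z-factor point above: slope is atan(rise/run),
# so when the projection's linear unit already matches the elevation unit the
# z factor stays at 1; otherwise it rescales the rise into the horizontal unit.
# This two-point sketch is not ESRI's 3x3 planar method, just the underlying trigonometry.
import math

def slope_degrees(dz, run, z_factor=1.0):
    """Slope angle in degrees for an elevation change dz over horizontal distance run."""
    return math.degrees(math.atan((dz * z_factor) / run))

# A 10 m rise over 10 m of ground is a 45-degree slope when both units are meters.
demoSlope = slope_degrees(10.0, 10.0, z_factor=1.0)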
# Process: Extract by Mask ==========================================================================================================
# In order to ensure that the outputs cover the full extent of the area of study, and only this area, the area of interest was extracted using the Mask tool. This Spatial Analyst tool
# extracts the cells of a raster that correspond to the areas defined by a mask. The mask was set as the standardExtent raster, which was created through the merging of the two provided
# DEM files. Attributes from the input rasters are carried over. The cell values of the output lie only within the environment mask and the input mask data.
distOutputList = [mainWaterDistOut, smorgSightingsDistOut, stinkGeiserDistOut, foodWaterDistOut, emergHealthDistOut, emergEvacDistOut]
count = 1
for distFile in distOutputList:
    maskOutputRaster = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\mask_" + str(count)
    arcpy.gp.ExtractByMask_sa(distFile, standardExtent, maskOutputRaster)
    count = count + 1
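# The masking step in miniature: keep a cell's value only where the mask has
# data, and write NoData (None here) everywhere else. This mirrors what
# ExtractByMask does with the standardExtent raster; the tiny grids below are
# illustrative, not project data.
def extract_by_mask(grid, mask, nodata=None):
    """Return a copy of grid with cells set to nodata wherever mask is falsy."""
    return [[v if m else nodata for v, m in zip(vrow, mrow)]
            for vrow, mrow in zip(grid, mask)]

demoValues = [[1, 2], [3, 4]]
demoMask = [[1, 0], [1, 1]]
maskedDemo = extract_by_mask(demoValues, demoMask)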
maskOutputList = [mainWaterMask, smorgMask, foodWaterMask, emergHealthMask, emergEvacMask, stinkGeiserMask, slopeCalc]
count = 1
for maskFile in maskOutputList:
    reclassRaster = "C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclass_" + str(count)
    arcpy.gp.Slice_sa(maskFile, reclassRaster, "10", "EQUAL_INTERVAL", "1")
    count = count + 1
    # Get the geoprocessing result objects and extract the standard deviation and mean
    getSTDResult = arcpy.GetRasterProperties_management(reclassRaster, "STD")
    getSTD = getSTDResult.getOutput(0)
    getMeanResult = arcpy.GetRasterProperties_management(reclassRaster, "MEAN")
    getMean = getMeanResult.getOutput(0)
    # Write the statistics for each layer to the file
    file.write(str(maskFile) + " Standard Deviation: " + getSTD + " Mean: " + getMean + "\n")
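# What Slice with EQUAL_INTERVAL does, in miniature: divide the value range into
# n equally wide zones labelled base..base+n-1 (base zone 1, as in the call
# above), with the top of the range falling in the highest zone. The sample
# values below are illustrative only.
def slice_equal_interval(values, zones=10, base=1):
    """Assign each value to one of `zones` equal-width intervals over its range."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / zones
    out = []
    for v in values:
        z = int((v - lo) / width) if width else 0
        out.append(base + min(z, zones - 1))  # clamp the maximum into the top zone
    return out

zonedDemo = slice_equal_interval([0.0, 25.0, 50.0, 99.9, 100.0], zones=10)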
reclassRasterList = [mainWaterReclass, smorgReclass, slopeReclass, foodWaterReclass, healthPodReclass, emergEvacReclass, stinkGeiserReclass]
# Optimal Area Calculations ========================================================================================================================
# Process: Weighted Overlay Optimal Area
# The Weighted Overlay Tool is often applied to solve multicriteria problems such as this. The submodels, or features of the landscape, are scored on a scale
# set by the user, in this case, 1-10. They must first be classified into a common preference scale (see "Slice" above). They are then weighted according to
# preference, with 10 being the highest, or most preferred, and 1 being the least preferred. A value of RESTRICTED may also be used. In this case, RESTRICTED was
# applied to the areas directly on top of the main waterbody, and to the zones closest to the Smorg habitats and the Stink Geysers. Normally, slope values representing
# slopes too steep for building would be restricted. However, technological advancements and the low gravity on New Hope mean that, in this special case, most slope values
# are not off limits for building. The criteria are also weighted in terms of relative importance, with a total of 100% being divided amongst the criteria by the user. Here,
# Smorg habitats are given the most weight, at 29%, since this is the area we most want to avoid.
#arcpy.gp.WeightedOverlay_sa("('C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\ReclassMainWater' 15 'Value' (1 Restricted; 2 10; 3 9; 4 8; 5 7; 6 6; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_smor1' 29 'Value' (1 Restricted; 2 Restricted; 3 1; 4 2; 5 3; 6 4; 7 7; 8 8; 9 9; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_Slop1' 5 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 Restricted; 10 Restricted;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassFoodWater' 20 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassHealthPod' 12 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassEmergEvac' 10 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassStinkGeiser' 9 'Value' (1 Restricted; 2 1; 3 2; 4 3; 5 4; 6 5; 7 7; 8 8; 9 9; 10 10;NODATA NODATA));1 10 1", SuitableAreas_3)
arcpy.gp.WeightedOverlay_sa("('C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\ReclassMainWater' 15 'Value' (1 Restricted; 2 10; 3 9; 4 8; 5 7; 6 6; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_smor1' 29 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 Restricted; 10 Restricted;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_Slop1' 5 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 Restricted; 10 Restricted;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassFoodWater' 20 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassHealthPod' 12 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassEmergEvac' 10 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassStinkGeiser' 9 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 Restricted;NODATA NODATA));1 10 1", SuitableAreas_3)
## Note: the weighted overlay remaps had to change when Slice was used in place of Reclassify, since the sliced Smorg and Stink Geyser zone numbers run in the
## opposite direction from the original reclassification (the commented-out call above shows the earlier remap).
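# The weighted-overlay arithmetic in miniature: each criterion's zone value is
# remapped to the common 1-10 scale, multiplied by its percent influence, and
# the results summed and rounded; a RESTRICTED remap value makes the whole cell
# restricted. The weights and remap tables below are illustrative, not the
# project's actual parameters.
RESTRICTED = "Restricted"

def weighted_overlay(cell_values, weights, remaps):
    """cell_values: one raw zone value per criterion; weights: percent
    influences summing to 100; remaps: one dict (zone -> score) per criterion."""
    total = 0.0
    for value, weight, remap in zip(cell_values, weights, remaps):
        score = remap[value]
        if score == RESTRICTED:
            return RESTRICTED  # any restricted criterion restricts the cell
        total += score * weight / 100.0
    return round(total)

demoRemapA = {1: RESTRICTED, 2: 10}
demoRemapB = {1: 2, 2: 8}
overlayOk = weighted_overlay([2, 2], [60, 40], [demoRemapA, demoRemapB])       # 10*0.6 + 8*0.4 = 9.2
overlayBlocked = weighted_overlay([1, 2], [60, 40], [demoRemapA, demoRemapB])  # restricted wins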
# Process: Con
# The Con tool performs a conditional if/else evaluation on each cell of an input raster. Here it keeps only the cells of the weighted overlay output that meet
# the user-defined criterion ("VALUE = 8"), producing a raster of the most suitable areas.
arcpy.gp.Con_sa(SuitableAreas_3, SuitableAreas_3__2_, suitableOnly, "", "VALUE = 8")
# Cost Path Calculations ========================================================================================================================
# Process: Weighted Overlay Cost Path
# In this case, the weighted overlay tool is employed to provide an input for the least cost path calculation. It is important to note that it is used
# differently here than in the optimal area calculation. Here, less desirable areas are given a higher number out of 10, while the most desirable features
# are given low values, making them less costly. **note** Using RESTRICTED in this case provides skewed results, as RESTRICTED values are assessed by the tool
# as value: -1, making this more desirable from a least cost path perspective. Instead of using RESTRICTED, the highest value, in this case, 10, was used.
#arcpy.gp.WeightedOverlay_sa("('C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\ReclassMainWater' 15 'Value' (1 10; 2 1; 3 2; 4 3; 5 4; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_smor1' 25 'Value' (1 10; 2 10; 3 9; 4 8; 5 7; 6 6; 7 5; 8 4; 9 3; 10 2;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_Slop1' 5 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassFoodWater' 24 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassHealthPod' 12 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassEmergEvac' 10 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassStinkGeiser' 9 'Value' (1 10; 2 9; 3 8; 4 7; 5 6; 6 5; 7 4; 8 3; 9 2; 10 1;NODATA NODATA));1 10 1", costSurface)
arcpy.gp.WeightedOverlay_sa("('C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\ReclassMainWater' 15 'Value' (1 10; 2 1; 3 2; 4 3; 5 4; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_smor1' 25 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\Reclass_Slop1' 5 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassFoodWater' 24 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassHealthPod' 12 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassEmergEvac' 10 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA); 'C:\\Users\\chels\\OneDrive\\Desktop\\ENVB530\\6.PythonProject\\projects\\Scratch.gdb\\reclassStinkGeiser' 9 'Value' (1 1; 2 2; 3 3; 4 4; 5 6; 6 7; 7 8; 8 9; 9 10; 10 10;NODATA NODATA));1 10 1", costSurface)
# Process: Cost Distance
arcpy.gp.CostDistance_sa(optimalSettlement, costSurface, OutputCostDistance, "", cost_bklnk, "", "", "", "", "")
# Process: Cost Path
# The cost path tool calculates the least cost path from one input (source) to another (destination). It works best with raster inputs. In this case,
# BEST_SINGLE was chosen as the calculation type. Using this method, for all cells on the input destination data, the least-cost path is derived
# from the cell with the minimum of the least-cost paths to source cells.
arcpy.gp.CostPath_sa(optimalShuttleBase, OutputCostDistance, cost_bklnk, costPath, "BEST_SINGLE", "Value")
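# CostDistance and CostPath in miniature: a Dijkstra search over a cost grid,
# where a cardinal move charges the average of the two cells' costs (the same
# accumulation rule the ESRI tools use for cardinal moves; diagonal moves are
# omitted here for brevity). Returns the accumulated least cost from source to
# destination; the grid below is illustrative only.
import heapq

def least_cost(grid, source, dest):
    """Accumulated least cost from source to dest over a 2-D cost grid."""
    rows, cols = len(grid), len(grid[0])
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        cost, (r, c) = heapq.heappop(heap)
        if (r, c) == dest:
            return cost
        if cost > best[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = cost + (grid[r][c] + grid[nr][nc]) / 2.0
                if step < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = step
                    heapq.heappush(heap, (step, (nr, nc)))
    return None  # destination unreachable

demoCostGrid = [[1, 9, 1],
                [1, 9, 1],
                [1, 1, 1]]
# The cheap route goes around the high-cost column, not through it.
demoTotal = least_cost(demoCostGrid, (0, 0), (0, 2))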
# Process: Raster to Polyline
arcpy.RasterToPolyline_conversion(costPath, costPathRaster, "ZERO", "0", "SIMPLIFY", "VALUE")
#Close the file when done
file.close()
## END
Pending... I am still attempting to get the script to print to my file, but have not yet had success. Results will be posted here once they are obtained.
Created for Advanced GIS for Natural Resource Management, in the McGill University Department of Natural Resource Sciences, Professor Jeffrey Cardille