* All scripts in this process are run in Python 3.3.0
master.py is launched from the server task manager on OKCPUB01. It runs continuously and spawns sub-processes.
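A minimal sketch of what that continuous loop could look like, assuming master.py writes the heartbeat timestamp (described under watchdog.py below) through a hypothetical write_heartbeat() helper; the sub-process name and pacing are placeholders, not confirmed details of the real script.

    # Minimal sketch of the master.py loop (assumed structure; the sub-process
    # launched, the pacing, and write_heartbeat() are placeholders).
    import subprocess
    import time

    def write_heartbeat():
        """Hypothetical helper: update the timestamp that watchdog.py reads."""
        pass

    while True:
        # Spawn the next sub-process and record when it was spawned.
        subprocess.Popen(["python", "dispatch.py"])  # placeholder sub-process
        write_heartbeat()
        time.sleep(60)  # placeholder pacing between spawns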
asset_alive_watchdog.py monitors...
watchdog.py keeps track of master.py by reading a timestamp that master.py writes to the database each time it spawns a sub-process. Sub-processes are spawned continuously, so if more than 30 minutes pass between spawns, watchdog.py emails the appropriate email group with the last time master.py spawned a sub-process. It continues to email on the hour, every hour, until master.py spawns a new process or the task is killed. It also runs on the server as a scheduled task.
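A sketch of the check watchdog.py performs, assuming the heartbeat is stored as a datetime in a single-row table and the email goes out through smtplib; the database driver, table, server, and address names below are placeholders rather than the real values.

    # Sketch of the watchdog check (table/column names, SMTP host, and
    # addresses are placeholders; pyodbc is an assumed driver).
    import datetime
    import smtplib
    from email.mime.text import MIMEText

    import pyodbc  # assumed driver

    THRESHOLD = datetime.timedelta(minutes=30)

    def last_heartbeat(conn):
        # Read the timestamp master.py writes each time it spawns a sub-process.
        row = conn.cursor().execute(
            "SELECT last_spawn FROM master_heartbeat").fetchone()
        return row[0]

    def send_alert(last_spawn):
        msg = MIMEText("master.py last spawned a sub-process at %s" % last_spawn)
        msg["Subject"] = "master.py heartbeat overdue"
        msg["From"] = "watchdog@example.com"   # placeholder
        msg["To"] = "alert-group@example.com"  # placeholder
        smtp = smtplib.SMTP("mail.example.com")  # placeholder SMTP host
        smtp.send_message(msg)
        smtp.quit()

    conn = pyodbc.connect("DSN=scada")  # placeholder connection string
    last_spawn = last_heartbeat(conn)
    if datetime.datetime.now() - last_spawn > THRESHOLD:
        send_alert(last_spawn)  # re-sent each hour the scheduled task runs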
gps_listener.py does exactly what the name implies: it listens for incoming cell connections and records the GPS coordinates each unit last reported. (It currently does not work due to a port conflict that has not yet been resolved.)
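A sketch in the spirit of gps_listener.py, assuming a TCP transport, a placeholder port, and a comma-separated "unit,lat,lon" payload; the real port and wire format are not documented here, and the port shown may be the one involved in the conflict.

    # Sketch of a GPS listener (transport, port, and payload format assumed).
    import socketserver

    class GPSHandler(socketserver.StreamRequestHandler):
        def handle(self):
            line = self.rfile.readline().decode("ascii", "ignore").strip()
            unit_id, lat, lon = line.split(",")  # placeholder payload format
            # The real script would record these values and a timestamp in
            # the database; here we just print them.
            print(unit_id, lat, lon)

    if __name__ == "__main__":
        # Port 5055 is a placeholder.
        server = socketserver.TCPServer(("0.0.0.0", 5055), GPSHandler)
        server.serve_forever()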
dispatch.py spawns a new process for each cell unit in the field. If the unit has a connection, the process pulls the data and updates the curval and value tables, which hold the current value and the history, respectively. If there is no connection to the unit, the SQL connection is closed and the process terminates. The child status of the device is set to true only while the process that references the device is running; at any other time it is false. This ensures that the device is always able to enter a new sub-process, whether or not it had a previous connection.
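A sketch of that per-unit lifecycle; the table, column, and DSN names are assumed placeholders, and poll_unit() is a hypothetical stand-in for the unit-type script.

    # Sketch of the per-unit child lifecycle (assumed schema and driver).
    import pyodbc  # assumed driver

    def poll_unit(unit_id):
        """Hypothetical stand-in for the unit-type script (e.g. fx10_r_3.py)."""
        return None

    def run_unit(unit_id):
        conn = pyodbc.connect("DSN=scada")  # placeholder connection string
        cur = conn.cursor()
        # Child status is true only while this process references the device.
        cur.execute("UPDATE units SET child = 1 WHERE id = ?", unit_id)
        conn.commit()
        try:
            data = poll_unit(unit_id)
            if data is not None:
                # Connection succeeded: update the current value, append history.
                cur.execute("UPDATE curval SET value = ? WHERE unit = ?",
                            data, unit_id)
                cur.execute("INSERT INTO value (unit, value) VALUES (?, ?)",
                            unit_id, data)
                conn.commit()
        finally:
            # With or without a connection, clear the flag and close SQL so
            # the device can always enter a new sub-process.
            cur.execute("UPDATE units SET child = 0 WHERE id = ?", unit_id)
            conn.commit()
            conn.close()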
dispatch.py sub-processes are named after the unit type. There is currently only one cell unit type, represented by fx10_r_3.py. New sub-process scripts will be named after their respective unit types and selected by dispatch.py based on the unit type. If an alert condition is detected here, the database is updated and the alert is placed in the queue.
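A sketch of how that unit-type-to-script selection could look inside dispatch.py; only fx10_r_3.py exists today, so the rest of the mapping is assumed.

    # Sketch of selecting the sub-process script by unit type (mapping assumed).
    import subprocess

    SCRIPTS_BY_UNIT_TYPE = {
        "fx10_r_3": "fx10_r_3.py",
        # future unit types map to scripts named after them
    }

    def spawn_for_unit(unit_id, unit_type):
        script = SCRIPTS_BY_UNIT_TYPE[unit_type]
        return subprocess.Popen(["python", script, str(unit_id)])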
* All scripts in this process are run in Python 2.7.3
Windows Task Scheduler runs these scripts on OKCPUB01.
satellite_pull.py runs roughly every ten minutes to pull data from SatAlarm over an HTTP request. Once the request is validated with credentials sent via XML, an XML string is returned to satellite_pull.py and written to a temporary file. The temporary file is then reopened, and the data structure within it is parsed and sent to the database. This process is similar to a dispatch.py sub-process, as it updates the same tables. Once parsing is finished, the script copies the temporary file into an archive directory under a new name. If an alert condition is detected, the script updates the queue in the database.
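A sketch of that flow, written Python 2.7 style to match this section; the URL, credential fields, tag names, and file paths are placeholders, and the real script writes parsed records to SQL rather than printing them.

    # Sketch of the satellite_pull.py flow (URL, credentials, tag names,
    # and paths are placeholders).
    import os
    import shutil
    import time
    import urllib2
    import xml.etree.ElementTree as ET

    REQUEST_XML = "<login><user>USER</user><password>PASS</password></login>"
    TEMP_PATH = "satalarm_temp.xml"
    ARCHIVE_DIR = "archive"

    # 1. Send the credential XML and write the XML reply to a temporary file.
    reply = urllib2.urlopen("https://satalarm.example.com/pull", REQUEST_XML)
    with open(TEMP_PATH, "wb") as f:
        f.write(reply.read())

    # 2. Reopen the temporary file, parse it, and push each record to the
    #    database (updating the same tables a dispatch.py sub-process does).
    tree = ET.parse(TEMP_PATH)
    for record in tree.getroot().findall("record"):  # placeholder tag name
        print record.attrib  # real script writes to SQL instead

    # 3. Archive: copy the temporary file into the archive directory
    #    under a new, timestamped name.
    if not os.path.isdir(ARCHIVE_DIR):
        os.mkdir(ARCHIVE_DIR)
    archive_name = "satalarm_%s.xml" % time.strftime("%Y%m%d_%H%M%S")
    shutil.copy(TEMP_PATH, os.path.join(ARCHIVE_DIR, archive_name))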
alert.py runs every two minutes to check the queue and determine whether any alerts need to be sent out. If the criteria are met, the alerts are sent and archived in a history table.
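A sketch of one alert.py pass; the table and column names are assumed, send_alert() is a hypothetical delivery helper, and the real script applies its own sending criteria before delivering.

    # Sketch of the alert queue pass (schema assumed; send_alert() hypothetical).
    import pyodbc  # assumed driver

    def send_alert(recipient, message):
        """Hypothetical: deliver the alert (e-mail, SMS gateway, etc.)."""
        pass

    conn = pyodbc.connect("DSN=scada")  # placeholder connection string
    cur = conn.cursor()
    rows = cur.execute(
        "SELECT id, recipient, message FROM alert_queue").fetchall()
    for alert_id, recipient, message in rows:
        # The real script checks its sending criteria here before delivering.
        send_alert(recipient, message)
        # Move the sent alert from the queue into the history table.
        cur.execute("INSERT INTO alert_history (recipient, message) VALUES (?, ?)",
                    recipient, message)
        cur.execute("DELETE FROM alert_queue WHERE id = ?", alert_id)
    conn.commit()
    conn.close()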
sacda_morning_report.py runs every morning. It sends a text message to each mechanic who has signed up for the service, showing the last known status of the unit.
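A sketch of the morning report, assuming delivery through an e-mail-to-SMS gateway via smtplib; the actual delivery path, table names, hosts, and addresses are not documented here and are placeholders.

    # Sketch of the morning report (delivery path and schema assumed).
    import smtplib
    from email.mime.text import MIMEText

    import pyodbc  # assumed driver

    conn = pyodbc.connect("DSN=scada")  # placeholder connection string
    rows = conn.cursor().execute(
        "SELECT sms_address, unit_name, last_status "
        "FROM morning_report_signups").fetchall()

    smtp = smtplib.SMTP("mail.example.com")  # placeholder SMTP host
    for address, unit_name, last_status in rows:
        msg = MIMEText("%s last known status: %s" % (unit_name, last_status))
        msg["From"] = "sacda@example.com"  # placeholder
        msg["To"] = address
        smtp.sendmail(msg["From"], [address], msg.as_string())
    smtp.quit()
    conn.close()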