Manage Model

This page is where you manage a specific model. From here you can run it, stop it, download it, review its results, and so on.

This page has two different views: one shown when K=10 was selected when specifying the model, and one shown when K=1 was selected.

K=10 View

This view summarizes all you need to know to evaluate and recreate this model.

The source, target variable, input variables and number of people in the model are displayed at the top of this form.

To the right are 6 action buttons. The actions are as follows:

  • Run Calculation: Press this button to update the calculation.
  • Download Results PDF: Puts all of these results in a PDF and downloads it to your computer.
  • Stop Model: Press Stop if you do not want the model to run again. The model must also be stopped before it can be applied to the data set.
  • Apply Model: Runs the prediction rules over the data and adds a new variable (Predicted Var) containing the estimate for each case (see the sketch after this list).
  • Save Model: Saves all the parameters associated with a model.
  • Download Data: Downloads the data used to create the model as CSV or XML.
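
As a rough illustration of the Apply Model action, the Python sketch below scores each case with the model's prediction rules and stores the result in a new Predicted Var field. The predict_case function and the dictionary-per-case layout are hypothetical stand-ins for illustration, not the product's actual implementation.

    # Hypothetical sketch: apply a model's prediction rules to every case
    # and attach the estimate as a new "Predicted Var" field.
    # `predict_case` stands in for the model's rules; it is not a real API.
    def apply_model(cases, predict_case):
        for case in cases:  # one dict per case (person)
            case["Predicted Var"] = predict_case(case)
        return cases

    # Example usage with a made-up rule:
    cases = [{"age": 34, "region": "North"}, {"age": 52, "region": "South"}]
    scored = apply_model(cases, lambda c: 1 if c["age"] > 40 else 0)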

At the bottom of this view we have the results. The first section contains the model quality results:

  • Accuracy: The % of the sample correctly predicted by the model (true positives plus true negatives).
  • Quality: This is the F-Measure = 2 * Precision * Recall / (Precision + Recall).
  • Precision: The % of all predicted positives that are true positives.
  • Recall: The % of all actual positives that the model predicts as positive: True Positives / (True Positives + False Negatives).
  • Threshold Value: The cut-off on the predicted value that maximizes accuracy (see the sketch after this list).
  • Contribution Chart: Shows how much information each input variable explains, measured as that variable's share of the total conditional entropy.
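
To make these definitions concrete, here is a minimal Python sketch of how Accuracy, Precision, Recall, Quality (F-Measure) and the Threshold Value are conventionally computed from true 0/1 outcomes and predicted scores. The function names are illustrative only and do not correspond to the product's internals.

    def confusion_counts(y_true, y_score, threshold):
        """Count true/false positives and negatives at a given threshold."""
        tp = fp = tn = fn = 0
        for truth, score in zip(y_true, y_score):
            predicted = 1 if score >= threshold else 0
            if predicted == 1 and truth == 1:
                tp += 1
            elif predicted == 1 and truth == 0:
                fp += 1
            elif predicted == 0 and truth == 0:
                tn += 1
            else:
                fn += 1
        return tp, fp, tn, fn

    def metrics(y_true, y_score, threshold):
        tp, fp, tn, fn = confusion_counts(y_true, y_score, threshold)
        accuracy = (tp + tn) / len(y_true)                    # Accuracy
        precision = tp / (tp + fp) if tp + fp else 0.0        # Precision
        recall = tp / (tp + fn) if tp + fn else 0.0           # Recall
        quality = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)            # Quality (F-Measure)
        return accuracy, precision, recall, quality

    def best_threshold(y_true, y_score):
        """Threshold Value: the cut-off on the predicted score that maximizes accuracy."""
        return max(sorted(set(y_score)),
                   key=lambda t: metrics(y_true, y_score, t)[0])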

K=1 View

This view is similar to the K=10 view: the action buttons and the summary information at the top are exactly the same, but the results displayed differ.

Instead of the Accuracy/Quality/Recall/Precision scores, a K=1 model just displays the model contrast score.

This score is calculated by taking the average of the predicted propensity scores that are greater than 50% and subtracting from it the average of the predicted propensity scores that are below 50%.
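
A minimal sketch of that calculation, assuming the propensity scores are available as fractions between 0 and 1 (the function name is illustrative only):

    # Contrast score: average propensity above 50% minus average propensity below 50%.
    def contrast_score(propensities):
        high = [p for p in propensities if p > 0.5]
        low = [p for p in propensities if p < 0.5]
        avg_high = sum(high) / len(high) if high else 0.0
        avg_low = sum(low) / len(low) if low else 0.0
        return avg_high - avg_low

    # Example: contrast_score([0.9, 0.8, 0.2, 0.1]) -> 0.85 - 0.15 = 0.7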

The distribution of these propensity scores is shown in the chart on the right, while the chart on the left displays each question's contribution to the predictions.

Return to Manage Models.