What is cross-validation?

Cross-validation is a technique in which part of the sample is used to train a model and the remainder is used to test it.

When you select K=10 when defining a model, the application runs 10 cross-validations to build it. The sample is split into 10 chunks, each containing 10% of the total, and each chunk serves as a Test cell. For each Test cell, the remaining 90% of the sample is used to train a model with KL's analytics engine.

Once trained, the model is applied to its Test cell. This is repeated for all Test cells, and model performance is then calculated using only the predictions made on these 10 Test cells.
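KL's analytics engine handles this automatically, but as an illustration, here is a minimal sketch of the same procedure in Python using scikit-learn's KFold. The data set, model type, and K=10 are assumptions chosen purely for the example:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)  # any labelled sample would do

k = 10
kfold = KFold(n_splits=k, shuffle=True, random_state=0)

test_scores = []
for train_idx, test_idx in kfold.split(X):
    # Train on the remaining 90% of the sample...
    model = LogisticRegression(max_iter=5000)
    model.fit(X[train_idx], y[train_idx])
    # ...then apply the trained model to the 10% Test cell.
    predictions = model.predict(X[test_idx])
    test_scores.append(accuracy_score(y[test_idx], predictions))

# Performance is reported only on the 10 held-out Test cells.
print("Mean accuracy across Test cells:", np.mean(test_scores))
```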

When K=1, no cross-validation takes place: the entire data set is used both to train and to test the model. This is quicker, but the resulting performance estimate is prone to overfitting and bias. When K=10, the performance estimate tends to generalise better and is more resilient to outliers and bias.
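To see why K=1 gives an optimistic picture, the sketch above can be extended to compare the two estimates. This is illustrative only, and the variable names continue from the previous block:

```python
# K=1: train and test on the same data, so errors the model has memorised
# are never exposed and the score is optimistic.
full_model = LogisticRegression(max_iter=5000)
full_model.fit(X, y)
k1_score = accuracy_score(y, full_model.predict(X))

print("K=1 (same data for train and test):", k1_score)
print("K=10 (averaged over held-out Test cells):", np.mean(test_scores))
```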