Explain K-Fold cross validation with code.

Asked by Naveen Yadav in Data Science, Nov 30, 2019
Answered by Naveen Yadav

Cross-validation is a resampling method used to estimate how well a model will perform on unseen data.

K-fold cross-validation is used to evaluate a model when only a limited data sample is available.

The procedure has a single parameter, k, which is the number of groups (folds) the data sample is split into. Each fold is held out once as the test set while the model is trained on the remaining k-1 folds, and the k resulting scores are averaged. When a specific value of k is chosen, it is substituted into the name of the method, so k = 10 becomes 10-fold cross-validation.

To implement this in Python, we can use scikit-learn's cross_val_score. The code below assumes a classifier and the training data (X_train, y_train) have already been defined:

# Applying k-fold cross-validation
from sklearn.model_selection import cross_val_score

# 'classifier' is the model to evaluate; cv=10 performs 10-fold cross-validation
accuracies = cross_val_score(estimator=classifier, X=X_train, y=y_train, cv=10)
print(accuracies.mean())  # average accuracy across the 10 folds
print(accuracies.std())   # spread of the accuracy across folds
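To make the splitting explicit, here is a minimal sketch of the same idea using scikit-learn's KFold splitter directly, roughly what cross_val_score does internally. The logistic regression model and the synthetic dataset below are illustrative assumptions, not part of the original answer:

# Minimal sketch: manual 10-fold cross-validation with KFold (illustrative example)
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Synthetic data stands in for X_train / y_train from the snippet above
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

kf = KFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf.split(X):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])                  # train on k-1 folds
    scores.append(model.score(X[test_idx], y[test_idx]))   # evaluate on the held-out fold

print(np.mean(scores), np.std(scores))  # same summary as accuracies.mean() / accuracies.std()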

To implement the same in R, we can use the caret package to create the folds and the e1071 package for the SVM classifier:

# Applying k-fold cross-validation
# install.packages('caret')
# install.packages('e1071')
library(caret)
library(e1071)   # provides the svm() function used below

# Split the training set into 10 folds, stratified on the target variable
folds = createFolds(training_set$Purchased, k = 10)

# Train and evaluate the classifier once per fold
cv = lapply(folds, function(x) {
  training_fold = training_set[-x, ]   # all rows except the current fold
  test_fold = training_set[x, ]        # the held-out fold
  classifier = svm(formula = Purchased ~ .,
                   data = training_fold,
                   type = 'C-classification',
                   kernel = 'radial')
  # Column 3 is assumed to be the target (Purchased); drop it before predicting
  y_pred = predict(classifier, newdata = test_fold[-3])
  cm = table(test_fold[, 3], y_pred)   # confusion matrix for this fold
  accuracy = (cm[1,1] + cm[2,2]) / (cm[1,1] + cm[2,2] + cm[1,2] + cm[2,1])
  return(accuracy)
})

# Average accuracy across the 10 folds
accuracy = mean(as.numeric(cv))
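As in the Python version, the per-fold accuracies stored in cv can also be summarised with their standard deviation, for example sd(as.numeric(cv)), to gauge how much the estimate varies from fold to fold.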


