Give this a try (modify the details as needed):

library(caret)
library(mlbench)
data(Sonar)

set.seed(1)
## 10 folds by default; returnTrain = TRUE returns the training indices
splits <- createFolds(Sonar$Class, returnTrain = TRUE)

## one data frame of hold-out indices and observed classes per fold
results <- lapply(splits,
                  function(x, dat) {
                    holdout <- (1:nrow(dat))[-unique(x)]
                    data.frame(index = holdout,
                               obs = dat$Class[holdout])
                  },
                  dat = Sonar)

mods <- vector(mode = "list", length = length(splits))

## foreach or lapply would do this faster
for(i in seq(along = splits)) {
  in_train <- unique(splits[[i]])
  set.seed(2)
  mod <- train(Class ~ ., data = Sonar[in_train, ],
               method = "svmRadial",
               preProc = c("center", "scale"),
               tuneLength = 8)
  results[[i]]$pred <- predict(mod, Sonar[-in_train, ])
  mods[[i]] <- mod
}

lapply(results, defaultSummary)
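If you want a single overall estimate rather than one summary per fold, you can pool the hold-out predictions before summarizing. A minimal sketch (using a mock `results` list of the same shape as the one built in the loop above, so it runs on its own; the real one would come from the loop):

```r
library(caret)

set.seed(5)
## mock per-fold obs/pred data frames, same shape as in the loop above
mock_fold <- function(n) data.frame(
  obs  = factor(sample(c("M", "R"), n, replace = TRUE), levels = c("M", "R")),
  pred = factor(sample(c("M", "R"), n, replace = TRUE), levels = c("M", "R")))
results <- replicate(10, mock_fold(21), simplify = FALSE)

## pool all hold-outs, then summarize once
pooled <- do.call(rbind, results)
defaultSummary(pooled)

## versus averaging the per-fold metrics
colMeans(do.call(rbind, lapply(results, defaultSummary)))
```

The two numbers usually agree closely when the folds are similar in size; pooling just avoids averaging metrics computed on very small hold-out sets.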
There's nothing wrong with the (nested) algorithm presented; in fact, it would likely perform well, with decent robustness to the bias-variance problem, on different data sets. You never said, however, whether the reader should assume the features you are using are the most "optimal," so if that's unknown, there are feature selection issues that must be addressed first.
FEATURE/PARAMETER SELECTION
A less biased approach is to never let the classifier/model come anywhere near feature/parameter selection, since you don't want the fox (classifier, model) guarding the chickens (features, parameters). Your feature (parameter) selection method is a $wrapper$, where feature selection is bundled inside iterative learning performed by the classifier/model. In contrast, I always use a feature $filter$ that employs a different method, far removed from the classifier/model, in an attempt to minimize feature (parameter) selection bias. Look up wrapping vs. filtering and selection bias during feature selection (G.J. McLachlan).
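To make the filter idea concrete, here is a minimal sketch assuming the Sonar setup above: rank features by a univariate t-test computed on the training fold only, then hand the top k features to the classifier. (k = 20 is an arbitrary choice for illustration, not from the original post.)

```r
library(caret)
library(mlbench)
data(Sonar)

filter_then_train <- function(dat, in_train, k = 20) {
  x <- dat[in_train, names(dat) != "Class"]
  y <- dat$Class[in_train]
  ## the filter never sees the classifier -- or the hold-out rows
  pvals <- apply(x, 2, function(col) t.test(col ~ y)$p.value)
  keep  <- names(sort(pvals))[1:k]
  train(x = dat[in_train, keep], y = y,
        method     = "svmRadial",
        preProc    = c("center", "scale"),
        tuneLength = 8)
}

## e.g., inside the resampling loop above:
##   mod <- filter_then_train(Sonar, in_train)
```

The key point is that the filter is re-applied inside each resample, on the training rows only; caret's sbf() ("selection by filter") automates this same pattern.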
There is always a major feature selection problem, for which the solution is to invoke a method of object partitioning (folds), in which the objects are partitioned into different sets. For example, simulate a data matrix with 100 rows and 100 columns, and then simulate a binary variate (0,1) in another column; call this the grouping variable. Next, run t-tests on each column using the binary (0,1) variable as the grouping variable. Several of the 100 t-tests will be significant by chance alone; however, as soon as you split the data matrix into two folds $\mathcal{D}_1$ and $\mathcal{D}_2$, each of which has $n=50$, the number of significant tests drops. Until you can solve this problem with your data by determining the optimal number of folds to use during parameter selection, your results may be suspect. So you'll need to establish some sort of bootstrap-bias method for evaluating predictive accuracy on the hold-out objects as a function of varying sample sizes used in each training fold, e.g., $\pi = 0.1n, 0.2n, 0.3n, 0.4n, 0.5n$ (that is, increasing sample sizes used during learning), combined with a varying number of CV folds, e.g., 2, 5, 10, etc.
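The thought experiment above can be sketched in a few lines. This is my own illustration, reading "the number of significant tests drops" as "far fewer features look significant in both folds at once":

```r
## 100 x 100 pure-noise matrix and a random (0,1) grouping variable
set.seed(3)
X   <- matrix(rnorm(100 * 100), nrow = 100)
grp <- rep(0:1, each = 50)

## a t-test per column on the full data
p_full <- apply(X, 2, function(col) t.test(col ~ grp)$p.value)
sum(p_full < 0.05)    # several columns "significant" by chance alone

## split the rows into two folds D1 and D2, each with n = 50
d1   <- sample(1:100, 50)
p_d1 <- apply(X[d1, ],  2, function(col) t.test(col ~ grp[d1])$p.value)
p_d2 <- apply(X[-d1, ], 2, function(col) t.test(col ~ grp[-d1])$p.value)

## how many columns survive in BOTH folds
sum(p_d1 < 0.05 & p_d2 < 0.05)
```

With pure noise, roughly 5% of columns pass in the full data, but requiring a feature to pass in both folds cuts the false discoveries sharply, which is the point of partitioning the objects.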
OPTIMIZATION/MINIMIZATION
You really seem to be solving an optimization or minimization problem for function approximation, e.g., $y=f(x_1, x_2, \ldots, x_j)$, where, e.g., regression or a predictive model with parameters is used and $y$ is continuously scaled. Given this, and given the need to minimize bias in your predictions (selection bias, bias-variance, information leakage from testing objects into training objects, etc.), you might look into employing CV alongside swarm intelligence methods, such as particle swarm optimization (PSO), ant colony optimization, etc. PSO (see Kennedy & Eberhart, 1995) adds parameters for social and cultural information exchange among particles as they fly through the parameter space during learning. Once you become familiar with swarm intelligence methods, you'll see that you can overcome a lot of biases in parameter determination. Lastly, I don't know if there is a random forest (RF; see Breiman, 2001, Machine Learning) approach for function approximation, but if there is, using RF for function approximation would alleviate 95% of the issues you are facing.
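For concreteness, here is a bare-bones PSO sketch; this is my own toy implementation with standard inertia/cognitive/social constants, minimizing a simple sphere function, and none of it comes from the original post:

```r
## minimal particle swarm optimizer: n particles in d dimensions
pso_min <- function(f, d = 2, n = 30, iters = 200,
                    w = 0.72, c1 = 1.49, c2 = 1.49, lim = 5) {
  X <- matrix(runif(n * d, -lim, lim), n, d)   # particle positions
  V <- matrix(0, n, d)                         # velocities
  P <- X                                       # personal-best positions
  pbest <- apply(X, 1, f)                      # personal-best values
  g <- P[which.min(pbest), ]                   # global best ("social" info)
  for (i in seq_len(iters)) {
    V <- w * V +
      c1 * runif(n * d) * (P - X) +                          # cognitive pull
      c2 * runif(n * d) * (matrix(g, n, d, byrow = TRUE) - X) # social pull
    X <- X + V
    fx <- apply(X, 1, f)
    better <- fx < pbest
    P[better, ] <- X[better, ]
    pbest[better] <- fx[better]
    g <- P[which.min(pbest), ]
  }
  list(par = g, value = min(pbest))
}

set.seed(6)
pso_min(function(x) sum(x^2))$value   # should converge near 0
```

The c1 term is each particle's memory of its own best position; the c2 term is the "cultural exchange" toward the swarm's best, which is exactly the social information sharing described above.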
Best Answer
Nobody ever reads the documentation :-/
The package vignette for feature selection had all the details. They can now be found at:
http://caret.r-forge.r-project.org/featureselection.html
in Algorithm #2.
In your case, you have inner resampling to tune the SVM at each iteration (line 2.9 of Algorithm #2) and an external one to evaluate the number of predictors (line 2.1).
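In caret, that nested scheme can be sketched with rfe(): the rfeControl resampling is the outer loop over predictor-subset sizes, and caretFuncs delegates the inner tuning to train(). A sketch assuming the Sonar data from the question (note this is computationally expensive, since every outer resample retunes the SVM):

```r
library(caret)
library(mlbench)
data(Sonar)

## outer resampling: evaluates each subset size (line 2.1)
ctrl <- rfeControl(functions = caretFuncs, method = "cv", number = 10)

set.seed(4)
svm_rfe <- rfe(x = Sonar[, -61], y = Sonar$Class,
               sizes = c(10, 20, 40),
               rfeControl = ctrl,
               ## the arguments below pass through to train(), which
               ## does the inner tuning of the SVM (line 2.9)
               method = "svmRadial",
               preProc = c("center", "scale"),
               tuneLength = 8,
               trControl = trainControl(method = "cv", number = 5))
svm_rfe
```

The inner trControl resamples never see the outer hold-outs, which is what keeps the subset-size estimate honest.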
Why does it do this? With small to moderate numbers of instances, a simple partition to a single test set does a very poor job of estimating performance and may very well over-fit to the predictors. Hawkins et al. [1] concisely summarize this point: "hold-out samples of tolerable size [...] do not match the cross-validation itself for reliability in assessing model fit and are hard to motivate".
I would advise reading [2], which reflects how difficult validating feature selection can be. If you have a lot of data, perhaps a single test set would be sufficient.
One other note: you don't show what svmFuncs is exactly, so I don't know how you are estimating variable importance. If you are using the default method, it does the analysis for each predictor independently, so using rerank = TRUE is a waste of time (i.e., the values will be the same at each calculation).

Max
[1] Hawkins, D. M., Basak, S. C., & Mills, D. (2003). Assessing Model Fit by Cross-Validation. Journal of Chemical Information and Modeling, 43(2), 579–586. doi:10.1021/ci025626i
[2] Ambroise, C., & McLachlan, G. (2002). Selection bias in gene extraction on the basis of microarray gene-expression data. Proceedings of the National Academy of Sciences, 99(10), 6562–6566.