You are very close to understanding k-fold cross-validation. To answer your questions in turn:
1. So to use k-fold cross validation the required data is the labeled data?
Yes, you must have some 'known' result for your model to be trained on. You are building a model, I assume, to predict some sort of outcome, whether by regression or classification. To do so, the model must be fit to data with a known result.
2. How about non labeled data?
For k-fold cross-validation, you split your data into k groups (e.g. 10). You then select one of those groups and use the model (built from your training data, i.e. the other k-1 groups) to predict the 'labels' of this testing group. Once your model is built and cross-validated, it can be used to predict data that don't currently have labels. Cross-validation is a means to prevent overfitting.
As a last clarification, you aren't only using 1 of the 10 groups. Let's say you had 100 samples. You split them into groups 1-10, 11-20, ... 91-100. You would first train on all the groups from 11-100 and predict the test group 1-10. Then you would repeat the same analysis with 1-10 and 21-100 as the training data and 11-20 as the testing group, and so forth. The results are typically averaged at the end.
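The rotation of training and testing groups can be sketched in a few lines of plain Python (the numbers match the 100-sample example above; fitting and scoring are left as comments):

```python
# Sketch of k-fold rotation: each group serves as the test set exactly once.
n_samples = 100
k = 10
fold_size = n_samples // k

# Fold 0 holds samples 0-9, fold 1 holds 10-19, and so on.
folds = [list(range(i * fold_size, (i + 1) * fold_size)) for i in range(k)]

for test_fold in range(k):
    test_idx = folds[test_fold]
    train_idx = [i for f in range(k) if f != test_fold for i in folds[f]]
    # Fit the model on train_idx, predict test_idx, record the score here.
    assert len(test_idx) == 10 and len(train_idx) == 90
```

In practice the sample indices would be shuffled before being assigned to folds, as noted further below.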
As a simple example say I have the following abbreviated data (binary classification):
Label Variable
A 0.354
A 0.487
A 0.384
A 0.395
A 0.436
B 0.365
B 0.318
B 0.327
B 0.381
B 0.355
Let's say I want to do 5-fold cross-validation on this (with only ten samples in folds of two, this is nearly leave-one-out cross-validation)
My first testing group will be:
A 0.354
A 0.487
My training set is the remaining data. See how the labels are present in both groups?
A 0.384
A 0.395
A 0.436
B 0.365
B 0.318
B 0.327
B 0.381
B 0.355
Please note that it is also best practice to randomize the grouping; this ordered split is purely for demonstration
Then you fit your model to the training set, using the variable(s) to best explain the labels (class A or B). The model fit to this training set is then used to predict the testing set: you remove the labels from the testing set, predict them using the trained model, and compare the predicted labels to the actual labels. This is repeated for all 5 folds and the results are averaged.
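The whole procedure can be sketched in plain Python using the toy data above. The "model" here is a deliberately trivial stand-in (a threshold halfway between the two training-class means), just to make the fold rotation concrete; any real classifier would slot into the same loop:

```python
# Toy data from the example above (label, variable).
data = [("A", 0.354), ("A", 0.487), ("A", 0.384), ("A", 0.395), ("A", 0.436),
        ("B", 0.365), ("B", 0.318), ("B", 0.327), ("B", 0.381), ("B", 0.355)]

k = 5  # ten samples, folds of two
folds = [data[i::k] for i in range(k)]  # interleaved so each fold mixes A and B

accuracies = []
for test_fold in folds:
    train = [row for fold in folds if fold is not test_fold for row in fold]
    # "Fit": a threshold halfway between the training-class means.
    mean = lambda xs: sum(xs) / len(xs)
    threshold = (mean([x for l, x in train if l == "A"])
                 + mean([x for l, x in train if l == "B"])) / 2
    # "Predict" the held-out fold and compare to the true labels.
    correct = sum((("A" if x > threshold else "B") == label)
                  for label, x in test_fold)
    accuracies.append(correct / len(test_fold))

cv_accuracy = sum(accuracies) / len(accuracies)  # averaged over all folds
```

Each iteration fits only on the training folds and is scored only on the held-out fold, which is the overfitting protection being described.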
Once everything is completed and you have your wonderfully cross-validated model, you can use it to predict unlabeled data and have some measure of confidence in your results.
Extended for Parameter Tuning
Let's say you are tuning a partial least squares (PLS) model (it doesn't matter for demonstration purposes if you don't know what this is). I would like to determine how many components (the tuning parameter) the model should have. I would like to test 2, 3, 4, and 5 components and see how many I should use to maximize predictive accuracy without overfitting the model. I would conduct the entire cross-validation series for each component count and average the results of each run (the average predictive accuracy across all folds).
Assuming classification accuracy is your metric, let's say these are my results (completely made up here):
2 components: 70%
3 components: 82%
4 components: 78%
5 components: 74%
Clearly, I would then choose 3 components for my model, which has now been cross-validated to avoid overfitting while maximizing predictive accuracy. I can then use this optimized model to predict a new dataset where I don't know the labels.
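As a sketch, the tuning loop looks like this. `cv_accuracy` is a hypothetical stand-in that would run the full cross-validation for a given component count; here it simply replays the made-up numbers above:

```python
# Hypothetical helper: in a real analysis this would run the entire
# cross-validation series for the given component count and return
# the mean accuracy over all folds.
def cv_accuracy(n_components):
    made_up = {2: 0.70, 3: 0.82, 4: 0.78, 5: 0.74}
    return made_up[n_components]

candidates = [2, 3, 4, 5]
scores = {k: cv_accuracy(k) for k in candidates}
best = max(scores, key=scores.get)  # picks the component count with top CV accuracy
```

The point is that the model is never tuned against data it will later be scored on; each candidate gets its own full cross-validation.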
There's nothing wrong with the (nested) algorithm presented; in fact, it would likely perform well, with decent robustness against the bias-variance problem, on different data sets. You never said, however, that the reader should assume the features you are using are the most "optimal", so if that's unknown, there are some feature selection issues that must be addressed first.
FEATURE/PARAMETER SELECTION
A less biased approach is to never let the classifier/model come close to anything remotely related to feature/parameter selection, since you don't want the fox (classifier, model) guarding the chickens (features, parameters). Your feature (parameter) selection method is a $wrapper$, where feature selection is bundled inside the iterative learning performed by the classifier/model. By contrast, I always use a feature $filter$ that employs a different method, far removed from the classifier/model, as an attempt to minimize feature (parameter) selection bias. Look up wrapping vs. filtering and selection bias during feature selection (G.J. McLachlan).
There is always a major feature selection problem, for which the solution is to invoke a method of object partitioning (folds), in which the objects are partitioned into different sets. For example, simulate a data matrix with 100 rows and 100 columns, and then simulate a binary variate (0,1) in another column; call this the grouping variable. Next, run t-tests on each column using the binary (0,1) variable as the grouping variable. Several of the 100 t-tests will be significant by chance alone; however, as soon as you split the data matrix into two folds $\mathcal{D}_1$ and $\mathcal{D}_2$, each of which has $n=50$, the number of significant tests drops. Until you can solve this problem with your data by determining the optimal number of folds to use during parameter selection, your results may be suspect. So you'll need to establish some sort of bootstrap-bias method for evaluating predictive accuracy on the hold-out objects as a function of the sample sizes used in each training fold, e.g., $\pi=0.1n, 0.2n, 0.3n, 0.4n, 0.5n$ (that is, increasing sample sizes used during learning), combined with a varying number of CV folds, e.g., 2, 5, 10, etc.
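The chance-significance effect is easy to simulate. A minimal sketch in plain Python: the Welch t statistic is computed by hand and compared against roughly 1.98, the approximate two-sided 5% critical value for these degrees of freedom, counting how many of 100 pure-noise columns come out "significant":

```python
import random

random.seed(0)
n_rows, n_cols = 100, 100
# Pure noise: every column is standard normal, the grouping is random.
X = [[random.gauss(0, 1) for _ in range(n_cols)] for _ in range(n_rows)]
group = [random.randint(0, 1) for _ in range(n_rows)]

def welch_t(a, b):
    """Welch t statistic for two samples, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

n_significant = 0
for j in range(n_cols):
    col0 = [X[i][j] for i in range(n_rows) if group[i] == 0]
    col1 = [X[i][j] for i in range(n_rows) if group[i] == 1]
    if abs(welch_t(col0, col1)) > 1.98:
        n_significant += 1
# Roughly 5 of the 100 noise columns will typically clear the threshold.
```

Selecting those "significant" columns as features and then cross-validating on the same data is exactly the selection bias being warned about.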
OPTIMIZATION/MINIMIZATION
You really seem to be solving an optimization or minimization problem for function approximation, e.g., $y=f(x_1, x_2, \ldots, x_j)$, where, for example, regression or a predictive model with parameters is used and $y$ is continuously scaled. Given this, and given the need to minimize bias in your predictions (selection bias, bias-variance, information leakage from testing objects into training objects, etc.), you might look into employing CV together with swarm intelligence methods, such as particle swarm optimization (PSO) or ant colony optimization. PSO (see Kennedy & Eberhart, 1995) adds parameters for social and cultural information exchange among particles as they fly through the parameter space during learning. Once you become familiar with swarm intelligence methods, you'll see that you can overcome a lot of biases in parameter determination. Lastly, I don't know if there is a random forest (RF; see Breiman, 2001, Machine Learning) approach for function approximation, but if there is, using RF for function approximation would alleviate 95% of the issues you are facing.
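For a feel of the mechanics, here is a bare-bones PSO sketch in plain Python. The objective is a stand-in (a simple sphere function); in the setting above it would instead be, say, a cross-validated prediction error evaluated at each particle's parameter vector. The weights are common textbook choices, not tuned values:

```python
import random

random.seed(1)

def objective(x):
    return sum(v * v for v in x)  # stand-in objective: minimum at the origin

dim, n_particles, n_iters = 2, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive (personal) and social weights

pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                  # each particle's best position so far
pbest_val = [objective(p) for p in pos]
gbest = min(pbest, key=objective)[:]         # swarm's best position so far

for _ in range(n_iters):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia + pull toward personal best + pull toward swarm best.
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        val = objective(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i][:], val
            if val < objective(gbest):
                gbest = pos[i][:]
```

The social (`c2`) and cognitive (`c1`) terms are the "information exchange among particles" referred to above.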
Best Answer
Holdout is essentially a 2-fold cross-validation. If you perform k-fold cross-validation correctly, no extra holdout set is necessary.
Make sure that your predictors are chosen inside each cross-validation iteration, based on the training set only (and not in advance on all the samples). You may also want to think about stratification, if appropriate.
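As a sketch of that warning, here is a hypothetical filter-style scorer (the name and the scoring rule, absolute class-mean difference, are illustrative) that must only ever see the training portion of each fold:

```python
def select_features(train_rows, train_labels, n_keep=2):
    """Rank features by |mean(class 1) - mean(class 0)|, computed on
    training data only, and return the indices of the top n_keep."""
    n_features = len(train_rows[0])
    scores = []
    for j in range(n_features):
        c0 = [r[j] for r, y in zip(train_rows, train_labels) if y == 0]
        c1 = [r[j] for r, y in zip(train_rows, train_labels) if y == 1]
        scores.append(abs(sum(c1) / len(c1) - sum(c0) / len(c0)))
    return sorted(range(n_features), key=lambda j: scores[j], reverse=True)[:n_keep]

# Inside each CV iteration:
#   keep = select_features(train_rows, train_labels)   # training fold only
#   fit on the kept columns, then score on the held-out fold's kept columns.
# Calling select_features on *all* the samples first leaks information
# from the test folds into the model and inflates the estimated accuracy.
```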