1. You can report whatever metric you like, as long as the estimate is obtained by cross-validation or on an independent test set. You can fine-tune a classifier on the training set, but its accuracy measured on that same set is biased upward.
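To see the upward bias concretely, here is a minimal Python sketch (made-up toy data, plain 1-nearest-neighbor instead of a fitted classifier): resubstitution accuracy is trivially perfect because each point is its own nearest neighbor, while leave-one-out accuracy reveals the mistake on an outlying point.

```python
# Hypothetical toy data: the point at 0.85 is an outlier labeled "B".
data = [(0.85, "B"), (1.0, "A"), (1.1, "A"), (1.2, "A"), (3.0, "B"), (3.1, "B")]

def predict_1nn(x, train):
    # 1-nearest-neighbor: return the label of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Resubstitution: each point's nearest neighbor is itself, so accuracy is 1.0
resub = sum(predict_1nn(x, data) == y for x, y in data) / len(data)

# Leave-one-out: hold each point out of its own training set
loo = sum(predict_1nn(x, data[:i] + data[i + 1:]) == y
          for i, (x, y) in enumerate(data)) / len(data)

print(resub)  # 1.0
print(loo)    # ~0.83: the outlier is misclassified once it cannot vote for itself
```

The gap between the two numbers is exactly the optimism that resubstitution estimates carry.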
2. Sure, you get a different accuracy if you use a different threshold for assigning observations to the positive class.
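A quick sketch of that point (hypothetical scores and labels, plain Python): the same scores yield different accuracies depending on where you cut.

```python
# Hypothetical predicted scores (probability of the positive class) and true labels
scores = [0.1, 0.3, 0.45, 0.6, 0.8, 0.95]
labels = [0,   0,   0,    1,   1,   1]

def accuracy_at(threshold):
    # Assign to the positive class when the score reaches the threshold
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

print(accuracy_at(0.5))  # 1.0: this cut separates the classes perfectly
print(accuracy_at(0.9))  # ~0.67: two positives now fall below the cut
```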
3. All loss methods for classifiers return the classification error by default, not the mean squared error. This is stated in many places in the documentation.
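To make the distinction concrete, here is a small Python sketch (made-up values, not MATLAB's `loss` method itself) computing both quantities from the same predictions; they are genuinely different numbers.

```python
# Hypothetical true labels, hard predictions, and predicted class-1 probabilities
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]
p_pos  = [0.9, 0.2, 0.4, 0.7, 0.1]

# Classification error: fraction of misclassified observations
classification_error = sum(p != y for p, y in zip(y_pred, y_true)) / len(y_true)

# Mean squared error on the probabilities (the Brier score): a different quantity
mse = sum((p - y) ** 2 for p, y in zip(p_pos, y_true)) / len(y_true)

print(classification_error)  # 0.2
print(round(mse, 3))         # 0.102
```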
4. Your post has code that obtains a ROC curve from resubstitution predictions. Just replace resubPredict with kfoldPredict so the curve is built from cross-validated predictions instead.
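The mechanics of building the ROC curve do not change; only the source of the scores does. A Python sketch of the idea (hypothetical scores standing in for what kfoldPredict would return, i.e. each score produced by a model that never saw that observation):

```python
# Hypothetical cross-validated scores and true labels for a binary problem
cv_scores = [0.1, 0.3, 0.35, 0.6, 0.7, 0.9]
y_true    = [0,   0,   1,    0,   1,   1]

def roc_points(scores, labels):
    # Sweep the decision threshold over the observed scores to trace the ROC curve
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(s >= t and y == 1 for s, y in zip(scores, labels))
        fp = sum(s >= t and y == 0 for s, y in zip(scores, labels))
        pts.append((fp / neg, tp / pos))  # (FPR, TPR)
    return pts

print(roc_points(cv_scores, y_true))
```

Feeding resubstitution scores through the same function would produce an optimistically steep curve; cross-validated scores give an honest one.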
5. Any estimate of classification performance should be computed on data not used for training; otherwise the estimate is optimistic. For simple models such as LDA, the optimistic bias may be small, but it is there.