Computing loss on test data and computing loss by cross-validation are two separate tasks. To compute loss on test data, you train an ensemble on all of your training data and evaluate it on the held-out test set. To compute loss by, say, 10-fold cross-validation, you grow 10 ensembles, each on 9/10 of your training data, and average the loss over the left-out 1/10 folds. I am not sure why you expect both tasks to be handled by one object; even if they were, it would not buy you anything in CPU time or memory. Just do two separate things:

1) Grow an ensemble on all training data and use it to compute the test loss.

2) Cross-validate this ensemble using its crossval method, then use the kfoldLoss method of the partitioned ensemble (a new object) to compute the cross-validated loss.
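A minimal sketch of both steps, assuming a classification ensemble and placeholder variables Xtrain, Ytrain, Xtest, Ytest (your data will have different names). fitcensemble, loss, crossval, and kfoldLoss are the relevant Statistics and Machine Learning Toolbox functions; older releases use fitensemble instead of fitcensemble.

```matlab
% 1) Train one ensemble on ALL training data; evaluate on the test set.
ens = fitcensemble(Xtrain, Ytrain, 'Method', 'Bag');
testLoss = loss(ens, Xtest, Ytest);

% 2) Cross-validate: crossval grows 10 NEW ensembles, each trained on
%    9/10 of the training data, and returns a partitioned ensemble.
cvens  = crossval(ens, 'KFold', 10);
cvLoss = kfoldLoss(cvens);   % average loss over the 10 left-out folds
```

Note that cvens is a separate object from ens: the cross-validated loss comes from the 10 fold ensembles inside cvens, not from the ensemble trained on all the data.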