If you have MATLAB R2012b or later, use the RUSBoost algorithm; it is designed specifically for imbalanced data.
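As a minimal sketch (assuming the same Xtrain, Ytrain, and Ntrees placeholders used in the code further down), RUSBoost is invoked through fitensemble like the other boosting methods:

% Sketch: Xtrain/Ytrain/Ntrees are placeholders, as in the GentleBoost code below.
ens = fitensemble(Xtrain,Ytrain,'RUSBoost',Ntrees,'Tree');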
If you go with GentleBoost, you need to optimize the tree complexity and the number of trees in the ensemble. (You could also experiment with the learning rate.) Both of those two parameters are likely far off their optimal values in your code.
First, fitensemble with GentleBoost by default grows decision stumps (trees with two leaves). Since the minority class is only 8% of the data, stumps are not sensitive to observations of the minority class. I often set the minimal leaf size to one half of the size of the minority class. The optimal leaf size may not be exactly that, but it should be in that ballpark. Do:
tmp = ClassificationTree.template('minleaf',some_number);
ens = fitensemble(Xtrain,Ytrain,'GentleBoost',Ntrees,tmp,'prior','uniform');
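For concreteness, here is one hedged way to derive some_number from the rule of thumb above (assuming Ytrain is a numeric label vector; minorityLabel is a placeholder you must replace):

% Placeholder: substitute your actual minority-class label.
minorityLabel = 1;
% Half the minority-class count, per the rule of thumb above.
some_number = floor(0.5 * sum(Ytrain == minorityLabel));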
Second, ten trees are usually not nearly enough. Inspect the ensemble accuracy by cross-validation, or on an independent test set, to decide how many trees you need; for boosting, a few hundred is typical.
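A sketch of that inspection, reusing the tmp template and training data from above (the 500 is just an illustrative upper bound on the number of trees):

cvens = fitensemble(Xtrain,Ytrain,'GentleBoost',500,tmp, ...
    'prior','uniform','kfold',5);              % cross-validated ensemble
figure;
plot(kfoldLoss(cvens,'mode','cumulative'));    % loss vs. number of trees
xlabel('Number of trees');
ylabel('Cross-validated classification error');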
Also, after you train the ensemble, don't just look at the classification error. Use the perfcurve function to compute an ROC (or other performance) curve and find the optimal threshold on the classification score. It is up to you to define what "optimal" means: you can, for instance, assign different misclassification costs to the two classes and find the threshold minimizing the expected cost.
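For example (a sketch: Xtest, Ytest, posLabel, and the two costs are hypothetical placeholders, and the positive-class score is assumed to be in column 2):

[~,score] = predict(ens,Xtest);        % per-class classification scores
[fpr,tpr,thre] = perfcurve(Ytest,score(:,2),posLabel);
% Hypothetical unequal misclassification costs; substitute your own.
costFP = 1; costFN = 10;
prev = mean(Ytest == posLabel);        % prevalence (assumes numeric labels)
expCost = costFP.*fpr.*(1-prev) + costFN.*(1-tpr).*prev;
[~,iBest] = min(expCost);
bestThreshold = thre(iBest);           % score threshold minimizing expected cost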
Best Answer
You got off on the wrong foot by conceptualizing this as a classification problem. The fact that $Y$ is binary has nothing to do with trying to make classifications. And when the balance of $Y$ is far from 1:1, you need to think about modeling tendencies for $Y$, not modeling $Y$ itself. In other words, the appropriate task is to estimate $P(Y=1 \mid X)$ using a model such as the binary logistic regression model. The logistic model is a direct probability estimator. Details may be found here and here.
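In MATLAB terms, a minimal sketch of direct probability estimation (assuming Xtrain is a numeric design matrix and Ytrain is coded 0/1; both names are placeholders):

% Fit a binary logistic regression model.
b = glmfit(Xtrain,Ytrain,'binomial','link','logit');
% Estimated P(Y=1 | X) for new observations.
phat = glmval(b,Xtest,'logit');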
Once you have a validated probability model and a utility/cost/loss function, you can generate optimal decisions. The probabilities let you trade off the consequences of wrong decisions.
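For instance, with a hypothetical loss costFN for acting as if $Y=0$ when in fact $Y=1$, and costFP for the reverse, the expected-loss-minimizing decision is a simple threshold on the estimated probability:

% Hypothetical losses; substitute your own.
costFN = 10; costFP = 1;
% Act as if Y=1 whenever phat*costFN > (1-phat)*costFP,
% i.e. whenever phat exceeds costFP/(costFP+costFN).
decision = phat > costFP/(costFP + costFN);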