There are several components to your question, but first I would ask why your sample is so skewed. You have an under-sampled training set which, as you point out, is odd. Can you assume that the two classes were sampled randomly from the population? If not, that is your most serious problem, and potentially not one you can recover from. The best you can do is build a model, calibrate it, and then test it in a pilot on the population.
Assuming representative samples, the issues are:
1) Will this imbalance keep the classifier from properly discriminating between the classes? Maybe. You must cross-validate any resulting model, so this should be testable, and you may find you need to oversample the negative cases to bring the data set into balance. It depends on the type of classifier being used and on the data. If you are using random forests or GBM I might not be concerned; if you are using a single decision tree, I would be.
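As a quick illustration of that classifier-dependence (synthetic data, assuming scikit-learn is available; the exact numbers are not from the original post), you can cross-validate a single tree against a forest on an imbalanced sample and compare ranking performance:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with roughly a 9:1 class ratio, for illustration only
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

tree_auc = cross_val_score(DecisionTreeClassifier(random_state=0),
                           X, y, cv=5, scoring="roc_auc").mean()
rf_auc = cross_val_score(RandomForestClassifier(n_estimators=200,
                                                random_state=0),
                         X, y, cv=5, scoring="roc_auc").mean()
print(f"single tree AUC: {tree_auc:.3f}, random forest AUC: {rf_auc:.3f}")
```

On data like this the ensemble typically copes with the imbalance noticeably better than the single tree, which is the point of the caveat above.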
2) Will the predicted probabilities from the model align with the population? The answer is no. If this matters for your application (i.e., the model must be well calibrated, not just good at ranking or separating the classes), it is a problem, but one that can be overcome. Any time the class density in the training data set does not match the population, the resulting probabilities of class membership will be biased. Here is a general-purpose way to re-calibrate them:
LINK
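One common form of such a re-calibration, assuming the imbalance came from undersampling the negatives by a known factor, is the prior-correction (Bayes) adjustment: undo the inflation of the positive:negative odds that the undersampling introduced. A minimal sketch (the function name is my own):

```python
def recalibrate(p_s, beta):
    """Map a probability predicted on an undersampled training set back
    to the population scale.  beta is the fraction of negatives kept
    (beta = 1 means no undersampling).  The correction multiplies the
    positive:negative odds by beta to undo their 1/beta inflation."""
    odds = p_s / (1.0 - p_s)   # odds on the undersampled scale
    odds *= beta               # restore the population odds
    return odds / (1.0 + odds)

# With no undersampling the probability is unchanged:
print(recalibrate(0.5, 1.0))    # -> 0.5
# If only 1 in 9 negatives was kept, an apparent 0.5 is really far lower:
print(recalibrate(0.5, 1 / 9))  # -> 0.1
```

This only fixes the systematic bias from a known sampling rate; if the model is mis-calibrated for other reasons, you would still want a held-out calibration step (e.g. Platt scaling or isotonic regression).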
Say the dataset is composed of $N$ and $P$, negatives and positives respectively, with $|P| = \frac{1}{9} |N|$ in the dataset, but with the real-life ratio being $|P| = \alpha |N|$ for some $\alpha > \frac{1}{9}$ (e.g., $\alpha = 1$ means that, in real life, positives and negatives are roughly equally frequent).
Partition the negative samples into two parts, $N_1$ and $N_2$, s.t. $|N_2| = \frac{1}{\alpha} |P|$ (so that $|P| = \alpha |N_2|$, matching the real-life ratio).
For example, in the following figure, $N_1, N_2, P$ are the parts in blue, cyan, grey, respectively.
To perform 3-fold CV, for example, partition each of the parts into 3. The first fold, say, would consist of the top 2 blue parts and the top 2 grey parts for training, and the bottom cyan part and the bottom grey part for testing.
Note that the test set is $\alpha$-balanced. In the training set, you'd use SMOTE to $\alpha$-balance it, as you're doing now.
(Of course, you can adapt this to other methods besides k-fold.)
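A minimal sketch of the scheme above in NumPy (the function name and interface are my own, not from the original post; training parts draw from $N_1$ and $P$, test parts from $N_2$ and $P$):

```python
import numpy as np

def alpha_balanced_folds(neg_idx, pos_idx, alpha, k=3, seed=0):
    """Build k (train, test) index pairs where each test set has the
    real-life ratio |P| = alpha * |N|, and each train set uses only
    N1 negatives (to be SMOTE-balanced afterwards)."""
    rng = np.random.default_rng(seed)
    neg = rng.permutation(np.asarray(neg_idx))
    pos = rng.permutation(np.asarray(pos_idx))
    # |N2| = |P| / alpha so the test parts have ratio |P| = alpha * |N2|
    n2_size = int(round(len(pos) / alpha))
    n2, n1 = neg[:n2_size], neg[n2_size:]
    pos_parts = np.array_split(pos, k)
    n1_parts = np.array_split(n1, k)
    n2_parts = np.array_split(n2, k)
    folds = []
    for i in range(k):
        test = np.concatenate([pos_parts[i], n2_parts[i]])
        train = np.concatenate(
            [p for j, p in enumerate(pos_parts) if j != i]
            + [p for j, p in enumerate(n1_parts) if j != i])
        folds.append((train, test))
    return folds

# Toy usage: 90 negatives (indices 0..89), 10 positives (90..99), alpha = 0.5
folds = alpha_balanced_folds(list(range(90)), list(range(90, 100)), alpha=0.5)
for train, test in folds:
    print(len(train), "train /", len(test), "test")
```

The training indices would then be passed through SMOTE (as in the question) before fitting, so that training is also $\alpha$-balanced.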
Unfortunately, for the percentages you mention, the test will probably be relatively noisy (unless your dataset is large). Personally, I don't see a way around that.
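To get a feel for that noise: the standard error of an estimated rate shrinks only as $1/\sqrt{n}$ in the number of positives in the test fold. A back-of-the-envelope check (the true recall of 0.8 is a made-up number for illustration):

```python
import math

def se_of_rate(p, n):
    """Binomial standard error of a rate p estimated from n cases."""
    return math.sqrt(p * (1 - p) / n)

# Same true recall, very different precision of the estimate:
print(se_of_rate(0.8, 30))    # about 0.073 with 30 positives in the fold
print(se_of_rate(0.8, 3000))  # about 0.0073 with 3000
```

With only a handful of positives per fold, swings of several percentage points between folds are expected and say little about the model.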
Best Answer
Imbalance is not necessarily a problem, but how you get there can be. It is unsound to base your sampling strategy on the target variable, because that variable incorporates the randomness in your regression model; if you sample based on it you will have big problems doing any kind of inference. I doubt it is possible to "undo" those problems.
You can legitimately over- or under-sample based on the predictor variables. In this case, provided you carefully check that the model assumptions hold (e.g. homoscedasticity is one that springs to mind as important in this situation, if you have an "ordinary" regression with the usual assumptions), I don't think you need to undo the oversampling when predicting. Your case would then be similar to that of an analyst who has designed an experiment explicitly to have a balanced range of the predictor variables.
Edit - addition - expansion on why it is bad to sample based on Y
In fitting the standard regression model $y = Xb + e$, the error $e$ is expected to be normally distributed with mean zero, and to be independent and identically distributed. If you choose your sample based on the value of $y$ (which includes a contribution from $e$ as well as from $Xb$), then $e$ will no longer have mean zero or be identically distributed. For example, low values of $y$, which might include very low values of $e$, would be less likely to be selected. This ruins any inference based on the usual means of fitting such models. Corrections can be made, similar to those used in econometrics for fitting truncated models, but they are a pain, require additional assumptions, and should only be employed when there is no alternative.
Consider the extreme illustration below. If you truncate your data at an arbitrary value of the response variable, you introduce very significant biases; if you truncate on an explanatory variable, there is not necessarily a problem. You see that the green line, based on a subset chosen because of their predictor values, is very close to the true fitted line; this cannot be said of the blue line, based only on the blue points.
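The effect in the illustration can be reproduced with a small simulation (synthetic data, assuming NumPy): truncating on the predictor leaves the slope estimate essentially unbiased, while truncating on the response flattens it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(0, 10, n)
e = rng.normal(0, 2, n)  # iid, mean-zero errors
y = 2 * x + e            # true slope is 2

slope_full = np.polyfit(x, y, 1)[0]

# Truncate on the predictor: keep x > 7.5 (errors still mean-zero)
keep_x = x > 7.5
slope_x = np.polyfit(x[keep_x], y[keep_x], 1)[0]

# Truncate on the response: keep y > 15 (near the boundary, only points
# with large positive errors survive, so e is no longer mean-zero)
keep_y = y > 15
slope_y = np.polyfit(x[keep_y], y[keep_y], 1)[0]

print(f"full: {slope_full:.2f}, x-truncated: {slope_x:.2f}, "
      f"y-truncated: {slope_y:.2f}")
```

The x-truncated fit stays close to the true slope of 2 (up to sampling noise), while the y-truncated fit is attenuated well below it.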
This extends to the less severe case of under- or oversampling (because truncation can be seen as undersampling taken to its logical extreme).