This is only really a problem if you compute the precision and recall first, then plug them in.
One can also compute the $F_1$ score as
$$F_1 = \frac{2 \cdot \textrm{True Positive}}{2 \cdot \textrm{True Positive} + \textrm{False Positive} + \textrm{False Negative}}$$
Plugging in your numbers, you'll arrive at an $F_1$ score of zero, which seems appropriate, since your classifier is just guessing the majority class.
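As a sanity check, the count-based formula is easy to compute directly. A minimal sketch (the counts below are made up for illustration):

```python
def f1_from_counts(tp, fp, fn):
    """F1 computed directly from confusion-matrix counts.

    Defined as 0 when the denominator is 0 (no positives
    predicted and none present)."""
    denom = 2 * tp + fp + fn
    return 0.0 if denom == 0 else 2 * tp / denom

# A classifier that always guesses the majority (negative) class
# on a set with, say, 10 positives never produces a true or false
# positive: tp = 0, fp = 0, fn = 10.
print(f1_from_counts(tp=0, fp=0, fn=10))  # 0.0
```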
There is an information-theoretic measure called proficiency that might be of interest if you are working on fairly unbalanced data sets. The idea is that you want a measure that remains sensitive to both classes even as the number of true positives or true negatives approaches zero. It's essentially $$
\frac{I(\textrm{predicted labels}; \textrm{actual labels})}{H(\textrm{actual labels})}$$
See pages 5--7 of White et al. (2004) for more details about its calculation and interpretation.
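For concreteness, here is how proficiency can be estimated from paired label samples using only the standard library (a sketch; the function names are my own, and everything is in nats):

```python
from collections import Counter
from math import log

def mutual_information(x, y):
    """I(X; Y) in nats, estimated from paired label samples."""
    n = len(x)
    c_x, c_y = Counter(x), Counter(y)
    c_xy = Counter(zip(x, y))
    return sum(c / n * log((c / n) / ((c_x[a] / n) * (c_y[b] / n)))
               for (a, b), c in c_xy.items())

def entropy(x):
    """H(X) in nats."""
    n = len(x)
    return -sum(c / n * log(c / n) for c in Counter(x).values())

def proficiency(actual, predicted):
    """I(predicted; actual) / H(actual): 0 for uninformative
    predictions, 1 when the predictions determine the labels."""
    return mutual_information(actual, predicted) / entropy(actual)

# Always guessing the majority class carries no information:
actual    = [0] * 90 + [1] * 10
predicted = [0] * 100
print(proficiency(actual, predicted))  # 0.0
```

Note that, unlike $F_1$, this stays symmetric in the two classes, which is the point of the measure.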
Say the dataset is composed of negative and positive sets $N$ and $P$, with $|P| = \frac{1}{9} |N|$ in the dataset, but with the true-life ratio being $|P| = \alpha |N|$ for some $\alpha > \frac{1}{9}$ (e.g., $\alpha = 1$ means that, in real life, positives and negatives are about equally frequent).
Partition the negative samples into two parts, $N_1$ and $N_2$, such that $|N_2| = |P| / \alpha$ (this is possible precisely because $\alpha > \frac{1}{9}$).
For example, in the following figure, $N_1, N_2, P$ are the parts in blue, cyan, grey, respectively.
To perform 3-fold CV, for example, partition each of the parts into 3. The first fold, say, would use the top 2 blue parts and top 2 grey parts for train, and the bottom cyan part and bottom grey part for test.
Note that the test set is $\alpha$ balanced. In the train set, you'd use SMOTE to $\alpha$ balance as you're doing now.
(Of course, you can adapt this to other methods besides k-fold.)
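A sketch of this fold construction in Python (the function name and interface are my own illustration, and the SMOTE step on the train side is left out):

```python
import numpy as np

def alpha_balanced_folds(neg_idx, pos_idx, alpha, k=3, seed=0):
    """Sketch of the scheme above: split the negatives into N1 and N2
    with |N2| = |P| / alpha, partition each part into k chunks, and
    build folds whose test sets draw negatives only from N2 (so each
    test set is alpha balanced as-is)."""
    rng = np.random.default_rng(seed)
    neg_idx = rng.permutation(neg_idx)
    pos_idx = rng.permutation(pos_idx)
    n2_size = int(round(len(pos_idx) / alpha))
    n2, n1 = neg_idx[:n2_size], neg_idx[n2_size:]
    folds = []
    for n1_part, n2_part, p_part in zip(np.array_split(n1, k),
                                        np.array_split(n2, k),
                                        np.array_split(pos_idx, k)):
        # Test: one chunk of N2 plus the matching chunk of P.
        test = np.concatenate([n2_part, p_part])
        # Train: the other chunks of N1 and P. N2 is reserved for
        # testing; the train set would still be alpha balanced
        # separately (e.g. with SMOTE, as in the question).
        train = np.concatenate([np.setdiff1d(n1, n1_part),
                                np.setdiff1d(pos_idx, p_part)])
        folds.append((train, test))
    return folds
```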
Unfortunately, for the percentages you mention, the test will probably be relatively noisy (unless your dataset is large). Personally, I don't see a way around that.
Best Answer
The answer to the title question is "of course it does"; you are shifting the distribution toward the minority class.
You can shift your model's predictions back to match the original distribution (see, e.g., Convert predicted probabilities after downsampling to actual probabilities in classification) or, equivalently, adjust the prediction threshold.
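For reference, one common correction when all positives are kept and negatives are sampled at rate $\beta$ is $p = \beta p_s / (\beta p_s - p_s + 1)$, where $p_s$ is the probability predicted by the model trained on the downsampled data. A sketch:

```python
def correct_downsampled_probability(p_s, beta):
    """Map a probability predicted by a model trained on data with
    negatives downsampled at rate beta (all positives kept) back to
    the original distribution; follows from Bayes' rule applied to
    the sampling scheme."""
    return beta * p_s / (beta * p_s - p_s + 1)

# If we kept 1 in 9 negatives (beta = 1/9), a downsampled-model
# score of 0.5 corresponds to a much smaller true probability:
print(correct_downsampled_probability(0.5, 1/9))  # approx. 0.1
```

With $\beta = 1$ (no downsampling) the correction is the identity, as it should be.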
There's also a serious question of whether you needed to resample in the first place; see What is the root cause of the class imbalance problem? and When is unbalanced data really a problem in Machine Learning? If you do get better performance after balancing, with correct use of prediction thresholds/shifting, I'd like to know about it. I haven't been able to find a definitive answer on whether balancing helps a classifier learn. (Henry's answer to the second linked question here suggests not, but...)