You're correct that TF-IDF weighting is often redundant when paired with a supervised learner. Here's the basic outline of why: In one typical form of TF-IDF weighting, the rescaling is logarithmic, so the weight for a word $w$ in a document $d$ is
$$
\text{TF-IDF}(w,d) = (\text{no. occurrences of $w$ in $d$}) \cdot f(w)
$$ for $N$ the number of documents in the corpus and $f(w)=\log\left(\frac{N}{\text{no. documents containing $w$}}\right)$. When $f(w)>0$ for every word, TF-IDF just amounts to a rescaling of the term frequencies. So if we write the matrix counting the occurrences of each word in each document as $X$, then a linear model has the form $X\beta$. If we use TF-IDF instead of the term frequency alone, the linear model can be written as $XD\tilde{\beta}$, where $D=\operatorname{diag}(k)$ is the diagonal matrix formed from the vector of weights $k_i=f(w_i)$; multiplying by $D$ rescales each column of $X$. In this setting, the choice to use TF-IDF or TF alone is inconsequential, because you'll get the same predictions: using the substitution $D\tilde{\beta}=\beta$, we can see that the only effect is to rescale $\beta$.
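As a sanity check, here is a minimal numpy sketch of this argument; the counts and response are made up purely for illustration. An ordinary least-squares fit on the raw counts and on the TF-IDF-scaled counts gives identical predictions, with $\beta = D\tilde{\beta}$:

```python
import numpy as np

# Toy term-count matrix X (6 documents x 3 words) and response y;
# the numbers are made up for illustration.
X = np.array([[2, 0, 1],
              [1, 1, 0],
              [0, 3, 1],
              [1, 0, 2],
              [0, 1, 1],
              [3, 0, 0]], dtype=float)
y = np.array([1.0, 0.0, 2.0, 1.0, 0.0, 3.0])

# Textbook IDF weights f(w) = log(N / df(w)); all positive here.
N = X.shape[0]
df = (X > 0).sum(axis=0)          # no. documents containing each word
D = np.diag(np.log(N / df))       # D = diag(f(w_1), ..., f(w_p))

# Ordinary least squares on raw counts vs. on TF-IDF-scaled counts.
beta = np.linalg.lstsq(X, y, rcond=None)[0]
beta_tilde = np.linalg.lstsq(X @ D, y, rcond=None)[0]

# Same predictions, and beta = D @ beta_tilde: the learner absorbs D.
assert np.allclose(X @ beta, (X @ D) @ beta_tilde)
assert np.allclose(beta, D @ beta_tilde)
```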
But there are at least two scenarios where the choice to use TF-IDF is consequential for supervised learning.
The first case is when $f(w)=0$. This happens whenever a term occurs in every document, as very common words like "and" or "the" typically do. In this case, TF-IDF zeroes out the corresponding column of $XD$, yielding a matrix which is not full rank. A rank-deficient design matrix is usually undesirable for supervised learning, so in practice these words are simply dropped from $X$, since they add no information. In this way, TF-IDF provides automatic screening of the most common words.
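To make the zero-weight case concrete, here is a small numpy sketch with made-up counts. (Library implementations often differ: for instance, scikit-learn's TfidfTransformer adds one to the IDF so that terms occurring in every document are not zeroed out entirely; the sketch below uses the textbook formula above.)

```python
import numpy as np

# Toy counts: the word in column 0 (say, "the") appears in every document.
X = np.array([[3, 1, 0],
              [2, 0, 1],
              [1, 2, 0],
              [4, 0, 2]], dtype=float)

N = X.shape[0]
df = (X > 0).sum(axis=0)   # df = [4, 2, 2]
f = np.log(N / df)         # f = [0, log 2, log 2]: "the" gets weight zero

print(X @ np.diag(f))      # column 0 is identically zero
```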
The second case is when the document vectors (the rows of $XD$) are rescaled to a common norm. Since a longer document is very likely to have a much larger vocabulary than a shorter one, it can be hard to compare documents of different lengths, and normalization puts them on a common scale. This rescaling also adjusts the weight of every word in a document by the same factor, independently of how rare or common each word is in the corpus. Moreover, rescaling each document's vector to have the same norm after computing TF-IDF gives a design matrix which is not a linear transformation of $X$, so the original matrix cannot be recovered by a linear rescaling.
Rescaling the document vectors has a close connection to cosine similarity: a dot product between unit-norm vectors is exactly their cosine similarity.
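A minimal numpy sketch of that connection, with made-up TF-IDF vectors:

```python
import numpy as np

# Two made-up TF-IDF document vectors.
a = np.array([0.0, 1.2, 3.4, 0.7])
b = np.array([2.1, 0.0, 1.1, 0.5])

cosine = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# After rescaling both vectors to unit norm, the plain dot product
# recovers the cosine similarity exactly.
a_hat = a / np.linalg.norm(a)
b_hat = b / np.linalg.norm(b)
assert np.isclose(a_hat @ b_hat, cosine)
```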
The popularity of TF-IDF in some settings does not necessarily impose a limitation on the methods you use. Recently, it has become very common to use word and token vectors that are either pre-trained on a large corpus or trained by the researcher for a particular task. Depending on what you're doing, the scale of the data, and the goal of your analysis, it might be more expedient to use TF-IDF, word2vec, or another method to represent natural language information.
A number of useful references are reproduced below for convenience.
K. Sparck Jones. "A statistical interpretation of term specificity and its application in retrieval". Journal of Documentation, 28 (1). 1972.
G. Salton, E. Fox, and H. Wu. "Extended Boolean information retrieval". Communications of the ACM, 26 (11). 1983.
G. Salton and M. J. McGill. "Introduction to modern information retrieval". 1983.
G. Salton and C. Buckley. "Term-weighting approaches in automatic text retrieval". Information Processing & Management, 24 (5). 1988.
H. Wu, R. Luk, K. Wong, and K. Kwok. "Interpreting TF-IDF term weights as making relevance decisions". ACM Transactions on Information Systems, 26 (3). 2008.
One simple technique that seems to work reasonably well for short texts (e.g., a sentence or a tweet) is to compute the vector for each word in the document, and then aggregate them using the coordinate-wise mean, min, or max.
Based on the results in one recent paper, it seems that using the min and the max works reasonably well. It's not optimal, but it's simple and about as good as or better than other simple techniques. In particular, if the vectors for the $n$ words in the document are $v^1,v^2,\dots,v^n \in \mathbb{R}^d$, then you compute $\min(v^1,\dots,v^n)$ and $\max(v^1,\dots,v^n)$. Here we're taking the coordinate-wise minimum, i.e., the minimum is a vector $u$ such that $u_i = \min(v^1_i, \dots, v^n_i)$, and similarly for the max.
The feature vector is the concatenation of these two vectors, so we obtain a feature vector in $\mathbb{R}^{2d}$. I don't know whether this is better or worse than a bag-of-words representation, but for short documents I suspect it might perform better, and it has the advantage of allowing pre-trained word embeddings.
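Here is a minimal numpy sketch of this aggregation; the random vectors stand in for pre-trained word embeddings:

```python
import numpy as np

def min_max_features(word_vectors):
    """Concatenate the coordinate-wise min and max of a document's
    word vectors, giving a feature vector in R^{2d}."""
    V = np.asarray(word_vectors)              # shape (n, d)
    return np.concatenate([V.min(axis=0), V.max(axis=0)])

# Random vectors stand in for embeddings of a 3-word document.
rng = np.random.default_rng(0)
doc = rng.normal(size=(3, 4))                 # n = 3 words, d = 4
features = min_max_features(doc)
print(features.shape)                         # (8,), i.e. 2d
```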
TL;DR: Surprisingly, the concatenation of the min and max works reasonably well.
Reference:
Cedric De Boom, Steven Van Canneyt, Thomas Demeester, and Bart Dhoedt. "Representation learning for very short texts using weighted word embedding aggregation". Pattern Recognition Letters, 2016; arXiv:1607.00570. See especially Tables 1 and 2.
Credits: Thanks to @user115202 for bringing this paper to my attention.
Best Answer
While it's possible to combine word embeddings using a weighted average or a concatenation of min/max values across word vectors, as described in this post, the output vector loses semantic information. A better alternative is to train a doc2vec model, an extension of word2vec that uses paragraph vectors as part of the context during training:
The word vectors in doc2vec are shared across all paragraphs, while each paragraph vector is unique to its paragraph. The doc2vec model is implemented in gensim, whose documentation includes worked examples.
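For concreteness, here is a minimal gensim sketch (assuming gensim 4.x, where the trained paragraph vectors live in model.dv; the corpus and hyperparameters are made up for illustration):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Tiny made-up corpus; a real application would use a large tokenized corpus.
corpus = [
    ["machine", "learning", "with", "text"],
    ["paragraph", "vectors", "extend", "word2vec"],
    ["tf", "idf", "weights", "terms", "by", "rarity"],
]
tagged = [TaggedDocument(words=doc, tags=[i]) for i, doc in enumerate(corpus)]

# Hyperparameters are illustrative, not recommendations.
model = Doc2Vec(tagged, vector_size=32, window=2, min_count=1, epochs=50)

# Each training paragraph gets its own vector ...
print(model.dv[0].shape)                                    # (32,)
# ... and unseen documents can be embedded by inference.
print(model.infer_vector(["new", "short", "text"]).shape)   # (32,)
```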