This is a generalization of the binomial transform of the function $f(k)$. See, for instance, the Wikipedia article on the binomial transform, and, in particular, the generalizations given therein. The Prodinger reference deals specifically with your expression for $F(n)$. Or, if you rewrite it as $$F(n) = (1-p)^n \sum_{k=0}^n \binom{n}{k} \left(\frac{p}{1-p}\right)^k f(k),$$ then you have a scaled version of the rising $k$-binomial transform of $f(k)$ as described in my 2006 paper with Laura Steil. At any rate, it appears the term you want is "binomial transform," and there is a small literature on its properties.
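The original expression for $F(n)$ is not reproduced here, but the rewrite above implies $F(n)=\sum_{k=0}^n\binom{n}{k}p^k(1-p)^{n-k}f(k)$, since $(1-p)^n\left(\frac{p}{1-p}\right)^k=p^k(1-p)^{n-k}$. Assuming that form, here is a minimal numerical check of the rewriting (the test function $f(k)=k^2$ and the value $p=0.3$ are arbitrary choices):

```python
import math

def F_direct(n, p, f):
    # F(n) = sum_{k=0}^n C(n,k) p^k (1-p)^(n-k) f(k)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * f(k)
               for k in range(n + 1))

def F_rescaled(n, p, f):
    # F(n) = (1-p)^n sum_{k=0}^n C(n,k) (p/(1-p))^k f(k)
    r = p / (1 - p)
    return (1 - p)**n * sum(math.comb(n, k) * r**k * f(k)
                            for k in range(n + 1))

f = lambda k: k ** 2          # arbitrary test function
for n in range(10):
    assert abs(F_direct(n, 0.3, f) - F_rescaled(n, 0.3, f)) < 1e-12
```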
Even in finite dimensions this definition is more convenient, since it is independent of coordinates. If you are interested in geometric applications, this is what you need.
This definition has the advantage that it clarifies the nature of the various invariants. Here are some more details.
A Gaussian measure on a topological vector space $V$ is a Borel measure $\mu$ such that any continuous linear functional $\newcommand{\bR}{\mathbb{R}}$ $\newcommand{\bE}{\mathbb{E}}$$\newcommand{\bC}{\mathbb{C}}$ $\xi: V\to\bR$, viewed as a random variable, has a Gaussian distribution. The Gaussian measures on $\bR$ are characterized by their mean and variance. Denote by $m(\xi)$ and $V(\xi)$ the mean and variance of $\xi$, respectively. Observe that the map
$$
m:V^*\to\bR,\;\; \xi \mapsto m(\xi)\in \bR
$$
is linear, so it naturally lives in $V^{**}$, the bidual of $V$. With a bit of luck, $m$ lands in $V\subset V^{**}$. This explains why, in many applications, some form of reflexivity is assumed of $V$.
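For example, when $V=\bR^n$, identifying $V^*\cong\bR^n$ via the standard pairing, every $\xi\in V^*$ acts as $\xi(v)=\langle \xi,v\rangle$, and
$$
m(\xi)=\bE_\mu\big[\xi(v)\big]=\Big\langle \xi,\ \int_V v\,\mu[dv]\Big\rangle,
$$
so $m$ is represented by the usual mean vector $\int_V v\,\mu[dv]$, an element of $V=V^{**}$.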
As for the variance $V(-)$, note that it is defined by the covariance form
$$
C: V^*\times V^*\to \bR,\;\;C(\xi,\eta)=\bE\big[\big(\,\xi-m(\xi)\,\big)\big(\,\eta-m(\eta)\,\big)\big].
$$
The covariance form is a symmetric, nonnegative definite form, again on the dual $V^*$, and $V(\xi)=C(\xi,\xi)$.
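In finite dimensions these objects are just the usual mean vector and covariance matrix. Here is a quick Monte Carlo sanity check of the identities above; it is only a sketch, and the particular vectors and matrix are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary mean vector and covariance matrix on V = R^3 (illustrative only).
m_vec = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.1],
                  [0.0, 0.1, 0.5]])

X = rng.multivariate_normal(m_vec, Sigma, size=500_000)

xi  = np.array([0.7, -1.2, 2.0])   # linear functionals xi(v) = <xi, v>
eta = np.array([1.0,  0.0, -1.0])

# m(xi) = E[xi(X)] is linear in xi and is represented by the mean vector.
print((X @ xi).mean(), xi @ m_vec)

# C(xi, eta) = Cov(xi(X), eta(X)) = xi^T Sigma eta, and V(xi) = C(xi, xi).
print(np.cov(X @ xi, X @ eta)[0, 1], xi @ Sigma @ eta)
print((X @ xi).var(ddof=1), xi @ Sigma @ xi)
```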
The Fourier transform (a.k.a. the characteristic function) is then the function $\newcommand{\ii}{\boldsymbol{i}}$
$$
\widehat{\mu}:V^*\to\bC,\;\;\widehat{\mu}(\xi)= \bE_\mu\big[ e^{\ii \xi}\,\big]=\int_V e^{\ii \xi(v)} \mu[dv].
$$
The characteristic function $f_\xi$ of $\xi$ is
$$
f_\xi(t)= \widehat{\mu}(t\xi),\;\;t\in\bR.
$$
The book Gaussian Measures by V. Bogachev adopts this point of view, and it is worth consulting. One source that I like very much is the fourth volume of the treatise on generalized functions by Gelfand and Vilenkin:
I. M. Gelfand, N. Ya. Vilenkin: Generalized Functions. Volume 4: Applications of Harmonic Analysis.
If $V$ is finite dimensional, then for any $m\in V^{**}$ and any symmetric nonnegative definite form $C$ on $V^*$ there exists a unique Gaussian measure with mean $m$ and covariance form $C$. This is no longer universally true in infinite dimensions, and it is a rather subtle issue. Things work out nicely if $V$ is the dual of a nuclear space, for example if $V=C^{-\infty}(\bR^n)$, the dual of $C^\infty_0(\bR^n)$.
One way to say this is: Given any random variable $X$, there is a sequence of random variables $X_n$ whose distributions are finite mixtures of Gaussians, such that $X_n \Rightarrow X$ (i.e. $X_n$ converges to $X$ in distribution, or weakly). I don't believe you need to consider infinite mixtures per se.
It is true. First, note that any constant random variable can certainly be approximated in distribution by Gaussians (just let the variance tend to 0, or maybe you already consider constants to be Gaussian). But any random variable can be approximated in distribution by a mixture of constants. (Approximate the cdf of $X$ by step functions.)
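Combining the two steps: approximate the cdf of $X$ by a step function, i.e. a mixture of constants placed at quantiles of $X$, then replace each constant by a narrow Gaussian. A minimal numerical sketch of this construction, with $X\sim\mathrm{Exp}(1)$ as an arbitrary target and illustrative values $n=200$, $\sigma=0.01$:

```python
import numpy as np
from scipy.stats import norm, expon

# Target X ~ Exp(1); place n equally weighted narrow Gaussians at the
# k/(n+1) quantiles of X -- the "approximate the cdf by step functions"
# construction, with each step smoothed into a Gaussian cdf.
n, sigma = 200, 0.01
centers = expon.ppf(np.arange(1, n + 1) / (n + 1))

x = np.linspace(0.0, 6.0, 1001)
mixture_cdf = norm.cdf(x[:, None], loc=centers, scale=sigma).mean(axis=1)

# Sup-distance between the cdfs; it shrinks as n grows and sigma -> 0,
# which gives convergence in distribution to X.
print(np.abs(mixture_cdf - expon.cdf(x)).max())
```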
In functional analysis language, one might say that in the Banach space $\mathcal{M}(\mathbb{R})$ of finite signed (or complex) measures on $\mathbb{R}$, the weak-* closed convex hull of the Gaussian probability measures contains all the probability measures. Equivalently, the convex combinations (i.e. mixtures) of the Gaussian probability measures are weak-* dense in the probability measures.