I think you want to write these as random variables:
$$
X_i \sim \text{Binom}\left(n=3,~p=\frac14\right)
\qquad
\text{for}
\qquad
1 \le i \le 4,
$$
so that the mean is
$E[X_i]=np=\frac34$
and the variance is
$\text{Var}(X_i)=np(1-p)=\frac9{16}$.
So for $W = 2X_1 - 3X_2 - 2X_4$,
we have mean
$$
E[W] = 2E[X_1] - 3E[X_2] - 2E[X_4] = \left(2-3-2\right) \frac34 = -\frac94
$$
and variance
$$
\text{Var}(W)=(2^2+3^2+2^2)\text{Var}(X_i)=17\cdot\frac9{16}=\frac{153}{16},
$$
which follow from the facts that
(for any $X,Y$ for which the quantities below are defined)
$$
\begin{align*}
&E[aX+bY] = a \, E[X] + b \, E[Y]\\
&\text{Var}(aX+bY) = a^2\text{Var}(X)+b^2\text{Var}(Y)+2ab \, \text{Cov}(X,Y)
\end{align*}
$$
and that the $X_i$ are all pairwise independent, so all the covariances
$\text{Cov}(X_i,X_j)=0$ for $i \ne j$.
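To make the step explicit, applying the variance rule term by term to $W$ (with every covariance equal to zero) gives
$$
\text{Var}(W) = 2^2\,\text{Var}(X_1) + (-3)^2\,\text{Var}(X_2) + (-2)^2\,\text{Var}(X_4)
= (4 + 9 + 4)\cdot\frac9{16} = \frac{153}{16}.
$$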
In the case of $Y=\sum_{i=1}^4 X_i$,
you should therefore get
$E[Y]=\sum_{i=1}^4 E[X_i]=4\cdot\frac34=3$
and
$\text{Var}(Y)=\sum_{i=1}^4 1^2\cdot\text{Var}(X_i)=4\cdot\frac9{16}=\frac94$.
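If you want to check these numbers numerically, here is a minimal Monte Carlo sketch (my own illustration, not part of the problem), assuming Python with numpy; the sample size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_sims = 10**6

# four independent Binomial(n=3, p=1/4) draws per simulated trial
X = rng.binomial(n=3, p=0.25, size=(n_sims, 4))
X1, X2, X3, X4 = X.T

W = 2 * X1 - 3 * X2 - 2 * X4   # the linear combination above
Y = X.sum(axis=1)              # the sum of all four variables

print(W.mean(), W.var())  # should be close to -9/4 = -2.25 and 153/16 = 9.5625
print(Y.mean(), Y.var())  # should be close to 3 and 9/4 = 2.25
```

The empirical means and variances should agree with the exact values above to a couple of decimal places.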
The cumulative distribution function is not really useful here; it does not apply directly to a sum of four independent copies of the random variable.
In general, suppose $p_1, p_2, \ldots, p_m \in (0,1)$ are IID realizations from some probability distribution with mean $\mu_p$ and standard deviation $\sigma_p$, that $n_1, n_2, \ldots, n_m \in \mathbb Z^+$ are IID realizations from another probability distribution with mean $\mu_n$ and standard deviation $\sigma_n$, and that for each $i = 1, 2, \ldots, m$ we have random variables $$X_i \sim \operatorname{Binomial}(n_i, p_i).$$ If we are interested in the distribution of $S = \sum_{i=1}^m X_i$, then by linearity of expectation $$\operatorname{E}[S] = \sum_{i=1}^m \operatorname{E}[X_i].$$

In turn, for each $X_i$, the law of total expectation gives $$\operatorname{E}[X_i] = \operatorname{E}[\operatorname{E}[X_i \mid n_i, p_i]] = \operatorname{E}[n_i p_i] = \operatorname{E}[n_i]\operatorname{E}[p_i] = \mu_n \mu_p;$$ thus $$\operatorname{E}[S] = m\mu_n \mu_p.$$ This assumes that $n_i$ and $p_i$ are independent for each $i$; independence of the $X_i$ across $i$ follows from the realizations being drawn independently across $i$.

The variance calculation is done in a similar fashion: $$\operatorname{Var}[S] \overset{\text{ind}}{=} \sum_{i=1}^m \operatorname{Var}[X_i],$$ whence by the law of total variance
$$\begin{align*}
\operatorname{Var}[X_i]
&= \operatorname{Var}[\operatorname{E}[X_i \mid n_i, p_i]] + \operatorname{E}[\operatorname{Var}[X_i \mid n_i, p_i]] \\
&= \operatorname{Var}[n_i p_i] + \operatorname{E}[n_i p_i (1-p_i)] \\
&= (\sigma_n^2 \sigma_p^2 + \sigma_n^2 \mu_p^2 + \sigma_p^2 \mu_n^2) + \mu_n \operatorname{E}[p_i(1-p_i)] \\
&= (\sigma_n^2 \sigma_p^2 + \sigma_n^2 \mu_p^2 + \sigma_p^2 \mu_n^2) + \mu_n (\mu_p - (\sigma_p^2 + \mu_p^2)).
\end{align*}$$
To understand the variance of $n_i p_i$, note that for two independent random variables $A$, $B$, with means and standard deviations $\mu_A, \sigma_A, \mu_B, \sigma_B$, respectively,
$$\begin{align*}\operatorname{Var}[AB]
&= \operatorname{E}[(AB)^2] - \operatorname{E}[AB]^2 \\
&= \operatorname{E}[A^2 B^2] - \operatorname{E}[A]^2 \operatorname{E}[B]^2 \\
&= \operatorname{E}[A^2]\operatorname{E}[B^2] - \mu_A^2 \mu_B^2 \\
&= (\operatorname{Var}[A] + \operatorname{E}[A]^2)(\operatorname{Var}[B] + \operatorname{E}[B]^2) - \mu_A^2 \mu_B^2 \\
&= (\sigma_A^2 + \mu_A^2)(\sigma_B^2 + \mu_B^2) - \mu_A^2 \mu_B^2 \\
&= \sigma_A^2 \sigma_B^2 + \sigma_A^2 \mu_B^2 + \sigma_B^2 \mu_A^2. \end{align*}$$
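As a quick sanity check of this identity (an example of my own, not part of the argument above): if $A$ and $B$ are independent $\operatorname{Bernoulli}(1/2)$ variables, then $AB \sim \operatorname{Bernoulli}(1/4)$, so $\operatorname{Var}[AB] = \frac14\cdot\frac34 = \frac3{16}$, and the formula gives the same value,
$$\sigma_A^2 \sigma_B^2 + \sigma_A^2 \mu_B^2 + \sigma_B^2 \mu_A^2 = \frac1{16} + \frac1{16} + \frac1{16} = \frac3{16}.$$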
Note that my computation of the variance differs from yours. I have substantiated my result by simulating $m = 10^6$ observations of $X_i$ with $n_i \sim \operatorname{Poisson}(\lambda)$ and $p_i \sim \operatorname{Beta}(a,b)$, for $\lambda = 11$ and $(a,b) = (7,3)$. This should give $\operatorname{Var}[X_i] = 1001/100$, which your formula does not reproduce. The reason your computation does not work is that the total variance of each $X_i$ is not merely the expectation of the conditional variance of $X_i$ given $n_i$ and $p_i$; the other term in the law of total variance must also be included, which captures the variability of the conditional expectation of $X_i$. In other words, $X_i$ varies because of the binomial variance even when $n_i$ and $p_i$ are fixed, but there is additional variation in the location of $X_i$ arising from the fact that $n_i$ and $p_i$ are themselves random.
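For reference, here is a minimal sketch of the kind of simulation described above, assuming Python with numpy (the seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
m = 10**6
lam, a, b = 11, 7, 3

# draw the random parameters, then one binomial observation per (n_i, p_i) pair
n = rng.poisson(lam=lam, size=m)
p = rng.beta(a, b, size=m)
X = rng.binomial(n=n, p=p)

# closed-form variance from the law-of-total-variance formula derived above
mu_n, var_n = lam, lam                        # Poisson mean and variance
mu_p = a / (a + b)                            # Beta mean
var_p = a * b / ((a + b) ** 2 * (a + b + 1))  # Beta variance
var_X = (var_n * var_p + var_n * mu_p**2 + var_p * mu_n**2
         + mu_n * (mu_p - (var_p + mu_p**2)))

print(X.var())  # simulated variance, close to 10.01
print(var_X)    # exact value: 1001/100 = 10.01
```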
Guide:
Great, you have solved for the parameters.
Now, you just have to use the formula
$$Pr(X=i) = \begin{cases} \binom{6}{i}\left( \frac13 \right)^i \left( \frac23 \right)^{6-i}& i \in \{ 0, \ldots, 6\}\\ 0 & \text{Otherwise}\end{cases}$$
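If you then want the numerical values, here is a small sketch using only the Python standard library (it simply tabulates the formula above):

```python
from math import comb

n, p = 6, 1/3

def pmf(i: int) -> float:
    """P(X = i) for X ~ Binomial(6, 1/3); zero outside {0, ..., 6}."""
    if i < 0 or i > n:
        return 0.0
    return comb(n, i) * p**i * (1 - p)**(n - i)

for i in range(n + 1):
    print(i, pmf(i))

print(sum(pmf(i) for i in range(n + 1)))  # sanity check: the probabilities sum to 1
```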