First of all, there are several typos in your calculations (e.g. it should read $\int_0^t W_s^2 \,ds$ instead of $\int_0^t W_t^2 \, ds$). Your calculation goes wrong when you write
$$\mathbb{E} \left( \int_0^t W_s^3 \, dW_s \right) = \frac{\mathbb{E}(W_t^4)}{4} - \frac{3}{2} \int_0^t V(W_s) \, ds = \frac{\color{red}{3t^4}}{4} - \frac{3t^2}{4}.$$
(I don't see what you did in this last step: you want to calculate $\mathbb{E}(W_t^4)$, so why replace it with $3t^4$?)
Note that applying Itô's lemma is overkill: since $(W_t)_{t \geq 0}$ is a Wiener process, we know that $W_t \sim N(0,t)$ (i.e. $W_t$ is Gaussian with mean $0$ and variance $t$), and the moments of Gaussian random variables can be calculated explicitly; for instance, writing $W_t = \sqrt{t}\, Z$ with $Z \sim N(0,1)$ gives $\mathbb{E}(W_t^4) = t^2 \, \mathbb{E}(Z^4) = 3t^2$. However, if you really want to invoke Itô's formula, then it goes like this: By Itô's formula, we have
$$W_t^4 = 4 \int_0^t W_s^3 \, dW_s + 6 \int_0^t W_s^2 \, ds. \tag{1}$$
Since $(W_s^3)_{s \geq 0}$ satisfies the integrability condition $\mathbb{E} \left( \int_0^t W_s^6 \, ds \right) < \infty$ for all $t \geq 0$, we know that the stochastic integral
$$M_t := \int_0^t W_s^3 \, dW_s$$
is a martingale and therefore $\mathbb{E}M_t = \mathbb{E}M_0=0$. Taking expectation in $(1)$ yields
$$\mathbb{E}(W_t^4) = 6 \int_0^t \mathbb{E}(W_s^2) \, ds$$
by Fubini's theorem. Finally, since $\mathbb{E}(W_s^2)=s$, we get $\mathbb{E}(W_t^4) = 3t^2$.
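As a quick sanity check of the value $\mathbb{E}(W_t^4) = 3t^2$ (an illustration, not part of the proof), one can sample $W_t \sim N(0,t)$ directly and compare the empirical fourth moment with $3t^2$; the value $t = 2$ below is chosen arbitrarily.

```python
import numpy as np

# Monte Carlo sanity check (illustration only): E(W_t^4) should equal 3 t^2.
rng = np.random.default_rng(seed=0)
t = 2.0
n = 1_000_000
w = rng.normal(0.0, np.sqrt(t), size=n)  # n independent samples of W_t ~ N(0, t)
estimate = np.mean(w**4)
print(estimate, 3 * t**2)  # the two numbers should agree to within ~1%
```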
@Lorenzo already pointed out that your reasoning does not work and that the stochastic integral
$$M_t := \int_0^t f(s) \, dW_s$$
fails in general to be differentiable with respect to $t$ (e.g. if we choose $f:=1$).
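The $f := 1$ case can be illustrated numerically: then $M_t = W_t$ itself, and since Brownian increments over a mesh $h$ scale like $\sqrt{h}$, the difference quotients $|W_{t+h} - W_t|/h$ blow up as $h \to 0$. The following sketch (standard Euler simulation of a Brownian path, with arbitrarily chosen mesh sizes) shows this divergence.

```python
import numpy as np

# For f := 1 the stochastic integral is M_t = W_t itself. Brownian increments
# scale like sqrt(h), so the difference quotients |W_{t+h} - W_t| / h diverge
# as the mesh h shrinks -- reflecting the non-differentiability of the paths.
rng = np.random.default_rng(seed=3)
n = 2**16
dt = 1.0 / n
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))))
means = []
for step in (2**12, 2**8, 2**4):  # mesh h = step * dt, shrinking
    h = step * dt
    means.append(np.mean(np.abs(np.diff(W[::step])) / h))
print(means)  # grows roughly like sqrt(2 / (pi * h)) as h decreases
```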
In fact, it is possible to show the following stronger statement:
**Proposition.** Let $f$ be a progressively measurable process satisfying the integrability condition
$$\mathbb{E} \left( \int_0^t f(s)^2 \, ds \right)<\infty \quad \text{for all $t \geq 0$}.$$
The following statements are equivalent:
- The process $M_t:= \int_0^t f(s) \, dW_s$ has a modification $(\tilde{M}_t)_{t \geq 0}$ whose sample paths $t \mapsto \tilde{M}_t$ are differentiable for all $\omega \in \Omega$.
- $f=0$ almost everywhere.
Since $f=0$ implies, by Itô's isometry, $M_t=0$ almost surely, it follows easily that the second statement implies the first one. For the proof of the converse we can use the following result:
**Lemma.** Let $(N_t)_{t \geq 0}$ be a martingale whose sample paths are continuous and of bounded variation. Then $N_t = N_0$ almost surely.
Since the process $(\tilde{M}_t)_{t \geq 0}$ has differentiable paths, they are, in particular, of bounded variation. Moreover, the integrability condition on $f$ ensures that the stochastic integral $M_t = \int_0^t f(s) \, dW_s$ is a martingale, and this implies that the modification $(\tilde{M}_t)_{t \geq 0}$ is a martingale. Applying the above lemma, we find that $\tilde{M}_t = 0$ almost surely. Thus,
$$\mathbb{E}(M_t^2) = \mathbb{E}(\tilde{M}_t^2) = 0.$$
On the other hand, Itô's isometry shows
$$\mathbb{E}(M_t^2) = \mathbb{E} \left( \int_0^t f(s)^2 \,ds \right).$$
Hence,
$$\mathbb{E} \left( \int_0^t f(s)^2 \,ds \right)=0.$$
As $t > 0$ is arbitrary, this proves $f = 0$ almost everywhere.
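Itô's isometry itself is easy to check numerically. As a sketch, take the deterministic sample choice $f(s) = s$ (an assumption made only for this illustration): the isometry predicts $\mathbb{E}(M_t^2) = \int_0^t s^2 \, ds = t^3/3$, which a Monte Carlo estimate of the Itô sums reproduces.

```python
import numpy as np

# Monte Carlo illustration of Ito's isometry for the sample choice f(s) = s
# (deterministic, chosen only for this example): E(M_t^2) = \int_0^t s^2 ds = t^3 / 3.
rng = np.random.default_rng(seed=1)
t, n_steps, n_paths = 1.0, 500, 20_000
ds = t / n_steps
s = np.arange(n_steps) * ds                 # left endpoints s_i of the partition
dW = rng.normal(0.0, np.sqrt(ds), size=(n_paths, n_steps))
M_t = (s * dW).sum(axis=1)                  # Ito sums  sum_i f(s_i) (W_{s_{i+1}} - W_{s_i})
second_moment = np.mean(M_t**2)
print(second_moment, t**3 / 3)              # both close to 0.333
```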
Short answer: the stochastic integral, in the Kunita-Watanabe sense, defines a martingale. See p. 195 of the notes at http://www-math.mit.edu/~dws/ito/ito7.pdf.
Long answer: we need to ask how the stochastic integral is defined. Answering that question requires a lot of preparation (e.g. a chapter of a textbook). You may consult the famous book "Probabilities and Potential" by Claude Dellacherie and Paul-André Meyer.
Although it is true that the Riemann sums converge to the stochastic integral in probability, this fact is quite hard to prove and involves real technical difficulties. For example, imagine that we have not yet defined the stochastic integral: how can we even say that the Riemann sums converge to some random variable in probability? In what sense? (It is not a single sequence of random variables, since it depends on the choice of the partition points $t_i$.)
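The convergence of Riemann sums can at least be observed numerically. The following sketch (one simulated path, arbitrarily chosen resolution) evaluates left-endpoint Riemann sums for $\int_0^1 W_s \, dW_s$ on successively finer partitions; by Itô's formula the limit in probability is $(W_1^2 - 1)/2$, not the classical-calculus answer $W_1^2/2$.

```python
import numpy as np

# Left-endpoint Riemann sums for \int_0^1 W_s dW_s along one simulated path;
# Ito's formula identifies the limit (in probability) as (W_1^2 - 1)/2.
rng = np.random.default_rng(seed=2)
n = 2**16
dt = 1.0 / n
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))))
for k in (2**8, 2**12, 2**16):   # number of partition intervals
    Wk = W[:: n // k]            # the path sampled on a grid with k intervals
    riemann = np.sum(Wk[:-1] * np.diff(Wk))
    print(k, riemann)
print("Ito limit:", (W[-1]**2 - 1.0) / 2)
```

Note that the left-endpoint choice is essential: evaluating at right endpoints or midpoints leads to a different limit (the latter being the Stratonovich integral).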