First, we discretize our domain into points $t_n=nh.$ Then, Euler's method is given by $$V^{n+1}=V^n+hf(V^n),$$ corresponding to the equation $$v'=f(v).$$ The sequence $(V^n)$ should approximate the solution in the sense that $V^n\approx v(t_n)$. In your case, we find that $f(v)=g-pv,$ and since we want to compute up to $t=1$ with $h=0.25,$ we want to compute $4$ iterations (this will correspond to time-stepping up to one second). Since $V^0$ is just the initial condition, we find\begin{align*}
V^0&=0\\
V^1&=V^0+hf(V^0)=h(g-pV^0)\\
V^2&=V^1+hf(V^1)=V^1+h(g-pV^1)\\
V^3&=V^2+hf(V^2)=V^2+h(g-pV^2)\\
V^4&=V^3+hf(V^3)=V^3+h(g-pV^3)
\end{align*} You can calculate these directly. In particular, the final one gives you $$V^4\approx v(t_4)=v(4h)=v(1).$$
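For concreteness, here is a minimal sketch of this iteration in Python. The constants $g=9.81$ and $p=1$ are placeholder assumptions (your problem's actual values are not stated here); substitute your own:

```python
# Euler's method for v' = g - p*v, v(0) = 0, up to t = 1 with h = 0.25.
# g and p are assumed example values -- replace with your constants.
g, p = 9.81, 1.0
h = 0.25
f = lambda v: g - p*v

V = 0.0  # V^0, the initial condition
for n in range(4):  # four steps: V^1, V^2, V^3, V^4
    V = V + h*f(V)
print(V)  # V^4, the approximation to v(1)
```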
Let's look at the half axis $y=0$, $t>0$. There the right side is $f(t,0)=t>0$ so that no solution may cross from the upper to the lower quadrant.
Close to zero one gets $y(t)=\frac12t^2+O(t^9)$ so that the solution will indeed enter the upper quadrant from the start. Furthermore, from $y'(t)\le t$ we get $y(t)\le\frac12t^2$, so that we also know an upper bound for the solution. We can restrict the region for the estimates of the Euler method to $(t,x)\in[0,1]\times[0,1]$, or, if you want to be cautious, $(t,x)\in[0,1]\times[-1,1]$.
On that region, $$|f(t,y)|\le 1=M_1$$ is a bound for the first derivative of any solution, and $$|f_t+f_yf|=|1-4y^3(t-y^4)|\le 5=M_2$$ a bound for the second derivative. Also, for the $y$-Lipschitz constant one gets similarly $$|f_y|=|-4y^3|\le 4=L. $$
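As a quick sanity check (a numerical sketch, not part of the proof), one can confirm these constants by maximizing the three expressions over a grid on the cautious region $[0,1]\times[-1,1]$:

```python
import numpy as np

# f(t,y) = t - y^4; maximize the expressions bounded by M1, M2, L above
T, Y = np.meshgrid(np.linspace(0, 1, 201), np.linspace(-1, 1, 201))
F = T - Y**4
M1 = np.max(np.abs(F))             # max |f|            -> 1
M2 = np.max(np.abs(1 - 4*Y**3*F))  # max |f_t + f_y f|  -> 5
L  = np.max(np.abs(-4*Y**3))       # max |f_y|          -> 4
print(M1, M2, L)
```

The maxima are attained at corners of the region (e.g. $M_2=5$ at $t=0$, $y=1$), so the grid recovers them exactly.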
Now insert into the error estimate
$$
e(t,h)\le \frac{M_2}{2L}(e^{Lt}-1)h=\frac{5}{8}(e^{4t}-1)h.
$$
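Plugging in numbers (the step size $h=1/32$ below is just an example value): at $t=1$ the bound evaluates to $\frac58(e^4-1)h\approx 33.5\,h$.

```python
import numpy as np

def euler_bound(t, h):
    # the a-priori estimate (M2/(2L))(e^{Lt}-1)h with M2 = 5, L = 4
    return 5/8*(np.exp(4*t) - 1)*h

print(euler_bound(1.0, 1/32))  # roughly 1.05
```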
Now let's see how that bound compares with the actual error of the numerical method:
```python
import numpy as np

def EulerIntegrate(f, y0, t):
    # explicit Euler steps over the grid t (steps may be nonuniform)
    y = np.asarray(len(t)*[y0], dtype=float)
    for k in range(len(t)-1):
        y[k+1] = y[k] + (t[k+1]-t[k])*f(t[k], y[k])
    return y
```
```python
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def f(t, y): return t - y**4

t = np.linspace(0, 1, 64*15+1)       # fine grid, spacing 1/960
N = [30*2**k for k in range(4)]      # subsampling factors
yEuler = [EulerIntegrate(f, 0, t[::n]) for n in N]
# high-accuracy reference solution
yGold = odeint(f, 0, t, tfirst=True, atol=1e-12, rtol=1e-13)[:, 0]

fig, ax = plt.subplots(1, 2, figsize=(12, 5))
for k in range(len(N)):
    h = t[N[k]] - t[0]               # actual step size N[k]/960
    ax[0].plot(t[::N[k]], yEuler[k], '-o', label="h=%.4f" % h)
    ax[1].plot(t[::N[k]], (yEuler[k]-yGold[::N[k]])/h, '-o', label="h=%.4f" % h)
ax[1].plot(t, 5/8*(np.exp(4*t)-1), c="gray", label="bound coeff.")
ax[0].legend(); ax[1].legend()
plt.show()
```
This produces the plots. The second plot shows the error profile, i.e. the estimated leading coefficient $c(t)$ of the global error $e(t,h)=c(t)h+O(h^2)$, over time; the near-coincidence of these scaled error curves for the different step sizes confirms the first-order (linear) convergence of the Euler method. The rapidly rising gray curve is the error bound, safely above the actual error. The left plot shows the Euler solutions against the backdrop of a much more precise numerical solution.
Yes, the computation proceeds in exactly the same way when $h<0$; you simply obtain a solution to the left of the initial point. The error-bound formulas may need a small adjustment, since they are written under the assumption $h>0$: replace $\text{error}\le Ch$ with $\text{error}\le C|h|$.
As hardmath pointed out in a comment, using $h<0$ in Euler's method is different from using the backward (a.k.a. implicit) Euler method.
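To illustrate (a minimal sketch; the test problem $y'=y$, $y(0)=1$ is an assumed example, not from the question): with $h<0$ each step is still the explicit update $y_{k+1}=y_k+hf(t_k,y_k)$, and the iteration simply marches leftward.

```python
import math

def explicit_euler(f, t0, y0, h, n):
    # n explicit Euler steps of (signed) size h starting at (t0, y0)
    t, y = t0, y0
    for _ in range(n):
        y = y + h*f(t, y)
        t = t + h
    return t, y

# integrate y' = y, y(0) = 1 leftward to t = -1 with h = -0.001
t, y = explicit_euler(lambda t, y: y, 0.0, 1.0, -0.001, 1000)
print(t, y)  # y approximates exp(-1)
```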