Conditional probabilities do not determine a unique function on the sample space. Since conditional expectations are defined only up to a null set, and one has to make an uncountable number of selections, the essential problem is whether one can "glue" them together in a coherent way, so that conditional probabilities can actually be computed by integrating the resulting function. There are several notions of regular conditional probability, and this paper by Faden gives necessary and sufficient conditions for the existence of some of them. For the particular version you mentioned, little is known about necessary conditions. The strongest existence results for regular conditional probabilities can be found in this paper by Pachl, although he only requires them to be measurable with respect to the completion of the measure. The machinery he uses is rather sophisticated: his method is based on a lifting, which he then shows (under a compactness condition) to yield a countably additive probability.
The most extensive resource on conditional probabilities is probably the book Conditional Measures and Applications by M.M. Rao. The book is not, however, recommended for its readability. Your question is addressed comprehensively in Chapter 3.
The sequence of comments above was getting a bit long, so I'll convert it into an answer.
As noted in @William's answer, making the domain's $\sigma$-algebra smaller while keeping the co-domain's $\sigma$-algebra fixed makes it harder for a function to be measurable.
As an example, take any non-constant random variable $X$ that is measurable with respect to a $\sigma$-algebra $\mathcal{F}$, and set $\mathcal{G} = \{\emptyset, \Omega\} \subset \mathcal{F}$. Then $X$ cannot be $\mathcal{G}$-measurable, since only constant functions are measurable with respect to the trivial $\sigma$-algebra.
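This is easy to check computationally for finite $\sigma$-algebras: a function is measurable with respect to a finite $\sigma$-algebra exactly when it is constant on each atom. Here is a minimal sketch (the function names and the discretisation of $\Omega$ are my own illustrative choices, not from any library):

```python
# Sketch: for a *finite* sigma-algebra, a function is measurable iff it is
# constant on each atom. We work with a discretised Omega = [0, 1].

def is_measurable(f, atoms):
    """f: real-valued function on sample points;
    atoms: list of lists of sample points (the atoms of the sigma-algebra)."""
    return all(len({f(w) for w in atom}) == 1 for atom in atoms)

# Discretise [0, 1] into sample points.
points = [i / 1000 for i in range(1001)]

# Trivial sigma-algebra {emptyset, Omega}: its single atom is all of Omega.
trivial_atoms = [points]

X = lambda w: w          # non-constant random variable
C = lambda w: 0.25       # constant random variable

print(is_measurable(X, trivial_atoms))  # False: X is not measurable
print(is_measurable(C, trivial_atoms))  # True: constants are measurable
```

The same check, with `trivial_atoms` replaced by finer atoms, applies to any finite $\sigma$-algebra.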
As an example of computing a conditional expectation, set $\Omega = [0,1]$, $\mathcal{F} = \mathcal{B}[0,1]$, and $P = \lambda$ (Lebesgue measure). We now set $\mathcal{G} = \{\emptyset , [0,1], [0,0.5), [0.5, 1]\}$ and relativise $\lambda$ to $\mathcal{G}$ (i.e. we assign Lebesgue measure to the sets in $\mathcal{G}$). Consider the random variable $X : \Omega \to \mathbb{R}$ given by $X(\omega) = \omega$. This is $\mathcal{F}$-measurable, but it is not $\mathcal{G}$-measurable.
Notice that $\int_{[0,0.5)} X dP = \frac{1}{8}$ and $\int_{[0.5, 1]} X dP = \frac{3}{8}$. We now wish to find a $\mathcal{G}$-measurable function that integrates to these same values. For a function to be $\mathcal{G}$-measurable, it must be constant on $[0,0.5)$ and $[0.5,1]$. Thus, we may set $$Y(\omega) = \begin{cases}
\frac{1}{4} & 0 \leq \omega < 0.5 \\
\frac{3}{4} & 0.5 \leq \omega \leq 1
\end{cases}$$
Notice that $Y$ is $\mathcal{G}$-measurable and $\int_{[0,0.5)} Y dP = \frac{1}{8}$ and $\int_{[0.5, 1]} Y dP = \frac{3}{8}$. Thus $E(X | \mathcal{G}) = Y$.
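The computation above can be verified numerically. The sketch below (Riemann midpoint sums, with my own variable names) approximates the four integrals and shows that $X$ and $Y$ integrate to the same values over each atom of $\mathcal{G}$:

```python
# Numerical sketch of the worked example: Omega = [0,1] with Lebesgue
# measure, G generated by the split at 0.5, X(w) = w, and the candidate
# Y = E(X | G) taking values 1/4 and 3/4. We approximate each integral
# with a midpoint Riemann sum.

N = 100_000
dw = 1.0 / N
pts = [(i + 0.5) * dw for i in range(N)]  # midpoints of the subintervals

X = lambda w: w
Y = lambda w: 0.25 if w < 0.5 else 0.75

int_X_left  = sum(X(w) * dw for w in pts if w < 0.5)   # approx 1/8
int_X_right = sum(X(w) * dw for w in pts if w >= 0.5)  # approx 3/8
int_Y_left  = sum(Y(w) * dw for w in pts if w < 0.5)   # approx 1/8
int_Y_right = sum(Y(w) * dw for w in pts if w >= 0.5)  # approx 3/8

print(round(int_X_left, 4), round(int_X_right, 4))  # 0.125 0.375
print(round(int_Y_left, 4), round(int_Y_right, 4))  # 0.125 0.375
```

Since $Y$ is $\mathcal{G}$-measurable and matches $X$'s integrals on every set in $\mathcal{G}$, it satisfies the defining property of $E(X \mid \mathcal{G})$.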
Your proof is not valid. For a correct proof, note that when $X$ is a simple function measurable w.r.t. $\mathcal G$, the result follows by taking linear combinations in the definition of conditional expectation. You can then go through the usual process (taking limits of simple functions and considering positive and negative parts) to prove it for any $X$ which is measurable w.r.t. $\mathcal G$ with $E|XY| <\infty$, still assuming $Y\geq 0$. Finally, you can drop the assumption $Y \geq 0$.
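As a sketch of the base case (assuming, since the original question is not quoted here, that the result in question is $E(XY \mid \mathcal G) = X\,E(Y \mid \mathcal G)$ for $\mathcal G$-measurable $X$), take an indicator $X = \mathbf{1}_A$ with $A \in \mathcal G$. For every $G \in \mathcal G$ we have $A \cap G \in \mathcal G$, so:

```latex
% Indicator step: X = 1_A with A in G, and Y >= 0 integrable.
\int_G \mathbf{1}_A \, E(Y \mid \mathcal{G}) \, dP
  = \int_{A \cap G} E(Y \mid \mathcal{G}) \, dP
  = \int_{A \cap G} Y \, dP
  = \int_G \mathbf{1}_A Y \, dP
% Hence 1_A * E(Y | G) is a version of E(1_A Y | G).
```

The simple-function case then follows by linearity, exactly as described above.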