"What am I doing wrong?" Nothing here. Your approach is correct: $\hat{\theta}_1$ is an unbiased estimator, for the reason you give (basically, it has to do with linearity of expectation). When $X_1,\dots,X_n$ are identically distributed and have finite expectation $\mu$,
$$
\mathbb{E}\!\left[\frac{1}{n}\sum_{k=1}^n X_k\right] = \frac{1}{n}\sum_{k=1}^n \mathbb{E}[X_k] = \frac{1}{n}\sum_{k=1}^n \mu = \mu
$$
For the same reason, $\hat{\theta}_2$ is unbiased as well:
$$
\mathbb{E}\!\left[\hat{\theta}_2\right] = \frac{2\mathbb{E}[X_1] - \mathbb{E}[X_6]+\mathbb{E}[X_4]}{2} = \frac{2\mu - \mu+\mu}{2} = \mu\ .
$$
Note that while both are unbiased estimators, the first one is better, as it has smaller variance: assuming the $X_i$ are independent with common variance $\sigma^2$, $\operatorname{Var}(\hat{\theta}_1)=\sigma^2/n$, while $\operatorname{Var}(\hat{\theta}_2)=\frac{4\sigma^2+\sigma^2+\sigma^2}{4}=\frac{3}{2}\sigma^2$. That is, while both expectations are the desired value $\mu$, the second estimator is much less accurate (more subject to random fluctuations).
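A quick Monte Carlo sketch makes the variance gap visible (the choices here are hypothetical: standard normal samples, so $\mu = 0$ and $\sigma^2 = 1$, with $n = 10$):

```python
import random

random.seed(0)
n, trials = 10, 200_000

est1 = []  # theta-hat-1: the sample mean
est2 = []  # theta-hat-2: (2*X1 - X6 + X4)/2
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]  # mu = 0, sigma^2 = 1
    est1.append(sum(xs) / n)
    est2.append((2 * xs[0] - xs[5] + xs[3]) / 2)

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# Both empirical means sit near mu = 0, but the variances differ:
# Var(theta1) = sigma^2 / n = 0.1, versus Var(theta2) = 1.5 * sigma^2.
print(mean(est1), var(est1))
print(mean(est2), var(est2))
```

Both estimators average out to $\mu$, but the spread of the second is fifteen times larger here.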
-- An attempt at addressing the larger question: assume the goal is to estimate a quantity $\theta$ of the distribution, given access to identically distributed (and, most of the time, assumed independent) r.v.s/samples from it, $X_1,X_2,\dots$. You basically want to come up with a nice function $f$ or sequence of functions $(f_n)$ such that either
- $f(X_1,\dots, X_k)$ is with high probability close to $\theta$, or
- $f_n(X_1,\dots,X_n)$ converges (when $n$ grows) towards $\theta$, in some specific sense.
Here, requiring an unbiased estimate means you also want $\mathbb{E}[f_n(X_1,\dots, X_n)] = \theta$ (for all $n$), where the expectation is over the realizations of $X_1,X_2,\dots$. Basically, your estimator is itself a random variable, which depends on the $X_i$'s (being a function of them); as such, you can compute its expectation, variance, etc.
Now, for the case of estimating the expected value, things can seem a bit circular, since your estimator is basically a linear function of the $X_i$'s, and computing its expected value amounts to computing their expected value, which is what you are trying to estimate. But it does make sense nonetheless (it's not because it looks too simple to be true that it's not true); for instance, one could come up with other estimators for the expected value, say
$$
Y = \ln \frac{e^{X_1}+e^{X_2}+\dots+e^{X_n}}{n}.
$$
Computing the expected value of this new random variable will not be as straightforward; yet eventually it will also boil down to using the fact that the expectation of each $X_i$ is $\mu$ (as well, maybe, as other assumptions like independence of the $X_i$'s).
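In fact, a simulation suggests this $Y$ is not even unbiased in general. A sketch with hypothetical choices (standard normal $X_i$, so $\mu = 0$ and $\mathbb{E}[e^{X_i}] = e^{1/2}$, with $n = 20$): by the law of large numbers the inner average concentrates near $e^{1/2}$, so $Y$ lands near $1/2$ rather than $\mu = 0$:

```python
import math
import random

random.seed(1)
n, trials = 20, 100_000
mu = 0.0  # true mean of the standard normal X_i

total = 0.0
for _ in range(trials):
    xs = [random.gauss(mu, 1.0) for _ in range(n)]
    # Y = ln of the sample mean of the exp(X_i)
    total += math.log(sum(math.exp(x) for x in xs) / n)

avg_y = total / trials
# Since E[e^{X_i}] = e^{1/2}, Y concentrates near ln(e^{1/2}) = 1/2,
# well away from mu = 0.
print(avg_y)
```

So analyzing such an estimator requires more than linearity of expectation; here the relevant facts are the law of large numbers and the value of $\mathbb{E}[e^{X_i}]$, which in turn depend on independence and on the full distribution of the $X_i$'s.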
Another answer has already pointed out why your intuition is flawed, so let us do some computations.
If $X$ is uniform, then:
$$
P(X_{max}<x)=P(X_i<x,\forall i)=\prod_i P(X_i<x)=
\begin{cases}
1 & \text{if } x\ge \theta \\
\left(\frac{x}{\theta}\right)^n & \text{if } 0\le x\le \theta \\
0 & \text{if } x\le 0
\end{cases}
$$
so the density function of $X_{max}$ is:
$$
f_{max}(x;\theta)=\begin{cases}
\frac{n}{\theta^n}x^{n-1} & \text{if } 0\le x\le \theta \\
0 & \text{otherwise}
\end{cases}
$$
Then we can compute the average of $X_{max}$:
$$
E(X_{max})=\int_0^\theta x \frac{n}{\theta^n}x^{n-1} dx =\frac{n}{n+1} \theta
$$
so $X_{max}$ is biased whereas $\frac{n+1}{n}X_{max}$ is an unbiased estimator of $\theta$.
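A short simulation confirms both the bias and the correction (the values $\theta = 5$ and $n = 10$ are hypothetical choices for illustration):

```python
import random

random.seed(2)
theta, n, trials = 5.0, 10, 100_000

raw, corrected = [], []
for _ in range(trials):
    xmax = max(random.uniform(0.0, theta) for _ in range(n))
    raw.append(xmax)                       # biased low: E = n/(n+1) * theta
    corrected.append((n + 1) / n * xmax)   # unbiased: E = theta

print(sum(raw) / trials)        # close to n/(n+1) * theta ~= 4.545
print(sum(corrected) / trials)  # close to theta = 5.0
```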
Often one uses a lower-case $x$ for the argument to the density function or to the c.d.f., and a capital $X$ to refer to the random variable whose density it is. But you've said your capital $X$ is the sample median, and that suggests a sample size more than $1$. So I am uncertain of the meaning of the question.
If we assume you mean simply a sample of one observation, then one can find an expected value by integrating: $$ \mathbb E X = \int_{-4}^4 xf(x)\,dx = \int_{-4}^4 x\frac{1+\theta x}{8}\,dx $$ $$ = \frac18 \left(\int_{-4}^4 x\,dx + \theta \int_{-4}^4 x^2\,dx \right) $$
The first integral is readily seen to be $0$ (you're integrating an odd function over an interval symmetric about $0$). The second term, including the factor $\frac18$, comes to $16\theta/3$, so $\mathbb E X = 16\theta/3$.
So $\frac{3}{16}\mathbb E X = \theta$. The expression $\frac3{16}X$ is "observable", i.e. does not depend on $\theta$, and its expected value equals $\theta$ whatever the true value of $\theta$ is; in other words, $\frac3{16}X$ is an unbiased estimator of $\theta$.
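As a sanity check, one can draw from this density by rejection sampling and verify that $\frac3{16}X$ averages to $\theta$ (the value $\theta = 0.2$ below is a hypothetical choice; note $|\theta| \le 1/4$ is needed for $f$ to be nonnegative on $[-4,4]$):

```python
import random

random.seed(3)
theta = 0.2  # hypothetical true value; |theta| <= 1/4 keeps f >= 0
M = (1 + 4 * abs(theta)) / 8  # upper bound on f(x) over [-4, 4]

def draw():
    # Rejection sampling with a Uniform(-4, 4) proposal:
    # accept x with probability f(x) / M.
    while True:
        x = random.uniform(-4.0, 4.0)
        if random.uniform(0.0, M) <= (1 + theta * x) / 8:
            return x

trials = 100_000
est = sum(3 / 16 * draw() for _ in range(trials)) / trials
print(est)  # close to theta = 0.2
```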