I was trying to determine which of the Gauss-Markov assumptions allow us to see that $b_1$ is an unbiased estimator of $\beta_1$. I have a feeling it's that $X_{i}$ is not random, but is there anything else that I'm missing?
Solved – Gauss-Markov assumptions
regression
Related Solutions
Unbiased estimators are typical in introductory statistics courses because they are (1) classic and (2) easy to analyze mathematically; the Cramér-Rao lower bound is one of the main tools for (2). Once we move away from unbiasedness, however, improvement becomes possible. The bias-variance tradeoff is an important concept in statistics for understanding how biased estimators can outperform unbiased ones.
Unfortunately, biased estimators are typically harder to analyze. In regression, much of the research of the past 40 years has concerned biased estimation, beginning with ridge regression (Hoerl and Kennard, 1970). See Frank and Friedman (1996) and Burr and Fry (2005) for some review and insights.
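As a small illustration of biased estimation, here is a minimal ridge-regression sketch in NumPy. The sample sizes, the penalty value, and the data-generating process are illustrative assumptions, not taken from the references above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression with correlated predictors, the setting where
# ridge's bias tends to pay off. All values below are illustrative.
n, p = 50, 10
X = rng.normal(size=(n, p)) + rng.normal(size=(n, 1))  # common factor -> correlated columns
beta = np.ones(p)
y = X @ beta + rng.normal(scale=2.0, size=n)

lam = 5.0  # ridge penalty; in practice chosen by cross-validation
ols = np.linalg.solve(X.T @ X, X.T @ y)                      # unbiased
ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)  # biased, shrunk toward 0
```

The shrinkage is visible directly: the ridge coefficient vector always has a smaller norm than the OLS one, which is the source of both its bias and its variance reduction.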
The bias-variance tradeoff becomes even more important in high dimensions, where the number of variables is large. Charles Stein surprised everyone when he proved that, in the normal means problem, the sample mean is inadmissible whenever the dimension $p \geq 3$ (Stein, 1956). The James-Stein estimator (James and Stein, 1961) was the first example of an estimator that dominates the sample mean; however, it is itself inadmissible.
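The James-Stein result is easy to check by simulation. A minimal sketch of the normal means problem (the true means and the replication count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal means problem: observe Y ~ N(theta, I_p), estimate theta.
# James-Stein shrinks Y toward 0; for p >= 3 its total squared-error
# risk falls below that of the MLE (the observation itself).
p, reps = 10, 2000
theta = np.full(p, 1.0)  # true means (illustrative choice)

Y = theta + rng.normal(size=(reps, p))          # one observation per replicate
norms2 = np.sum(Y**2, axis=1, keepdims=True)
js = (1 - (p - 2) / norms2) * Y                 # James-Stein estimator

risk_mle = np.mean(np.sum((Y - theta) ** 2, axis=1))  # close to p
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))  # strictly smaller
```

The plain James-Stein estimator is itself dominated by its positive-part version, which clips the shrinkage factor at zero, consistent with the inadmissibility remark above.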
An important part of the bias-variance problem is determining how bias should be traded off against variance; there is no single "best" estimator. Sparsity has been an important theme of research in the past decade. See Hesterberg et al. (2008) for a partial review.
Most of the estimators referenced above are non-linear in $Y$. Even ridge regression becomes non-linear once the data are used to choose the ridge parameter.
I am not sure whether I have understood your question correctly, but if you want to prove that the OLS estimator $\hat{\beta}$ is BLUE (the best linear unbiased estimator), you have to prove two things: first, that $\hat{\beta}$ is unbiased, and second, that $Var(\hat{\beta})$ is the smallest among the variances of all linear unbiased estimators.
A proof that the OLS estimator is unbiased can be found here: http://economictheoryblog.com/2015/02/19/ols_estimator/
and a proof that $Var(\hat{\beta})$ is the smallest among all linear unbiased estimators can be found here: http://economictheoryblog.com/2015/02/26/markov_theorem/
Best Answer
The LS estimator can be written as $$b = \beta + (X'X)^{-1}X'e.$$ Taking expectations, $b$ is unbiased if $E[(X'X)^{-1}X'e] = 0$, which holds when the design matrix $X$ is non-random (or, more generally, strictly exogenous), so that the errors have mean zero conditional on $X$.
So the assumption that delivers unbiasedness is: $$E[e_t \mid X] = 0.$$ The weaker condition $E[X_{t,k}\, e_t] = 0$ (regressors uncorrelated with the errors) only ensures that $(X'X)^{-1}X'e$ converges to zero, i.e. consistency, not unbiasedness.
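The decomposition $b = \beta + (X'X)^{-1}X'e$ can be checked by a minimal Monte Carlo sketch: hold $X$ fixed, draw errors with mean zero given $X$, and average the resulting estimates. All dimensions and coefficient values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed design, mean-zero errors: the average of b over many error
# draws should recover beta, demonstrating unbiasedness.
n, p = 30, 3
X = rng.normal(size=(n, p))            # fixed design across replications
beta = np.array([1.0, -2.0, 0.5])      # illustrative true coefficients

reps = 5000
E = rng.normal(size=(reps, n))         # errors, mean zero given X
B = E @ X @ np.linalg.inv(X.T @ X)     # row i holds (X'X)^{-1} X' e_i
b = beta + B                           # b = beta + (X'X)^{-1} X' e

print(b.mean(axis=0))  # close to beta
```

If the errors were instead generated to depend on $X$ (violating $E[e \mid X] = 0$), the same average would drift away from $\beta$, which is exactly the bias the assumption rules out.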