Two problems whose rewards are food rather than cash:
Let $f = t^{2d} + f_1 t^{2d-1} + f_2 t^{2d-2} + \cdots + f_d t^d + \cdots + f_2 t^2 + f_1 t + 1$ be a palindromic polynomial, so the roots of $f$ are of the form $\lambda_1$, $\lambda_2$, ..., $\lambda_d$, $\lambda_1^{-1}$, $\lambda_2^{-1}$, ..., $\lambda_d^{-1}$. Set $r_k = \prod_{j=1}^d (\lambda_j^k-1)(\lambda_j^{-k} -1)$.
Conjecture: The coefficients of $f$ are uniquely determined by the values of $r_1$, $r_2$, ..., $r_{d+1}$.
Motivation: When computing the zeta function of a genus $d$ curve over $\mathbb{F}_q$, the numerator is essentially of the form $f$. (More precisely, it is of the form $q^d f(t/\sqrt{q})$ for $f$ of this form.) Certain algorithms proceed by computing the $r_k$ and recovering the coefficients of $f$ from them. Note that you have to recover $d$ numbers, so you need at least $r_1$ through $r_d$; it is known that these alone do not suffice, and the conjecture is that exactly one more is enough.
Reward: Sturmfels and Zworski will buy you dinner at Chez Panisse if you solve it.
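Since the roots of $f$ come in inverse pairs, $r_k$ is simply the product of $\lambda^k - 1$ over all $2d$ roots of $f$. Here is a small numerical sketch of that computation; the function name and the example polynomial are my own illustrative choices, not part of the problem:

```python
import numpy as np

def r_values(coeffs, kmax):
    """Compute r_1, ..., r_kmax for a palindromic polynomial.

    coeffs: coefficients of f, highest degree first.  Because the roots
    come in pairs (lambda, 1/lambda), the product of (lambda^k - 1) over
    all 2d roots equals r_k = prod_j (lambda_j^k - 1)(lambda_j^{-k} - 1).
    """
    roots = np.roots(coeffs)
    return [np.prod(roots**k - 1).real for k in range(1, kmax + 1)]

# d = 1 example: f = t^2 - 3t + 1, so lambda + 1/lambda = 3.
# By hand: r_1 = 2 - 3 = -1 and r_2 = 2 - (3^2 - 2) = -5.
print(r_values([1, -3, 1], 2))  # → approximately [-1.0, -5.0]
```

For $d = 1$ the single value $r_1 = 2 - (\lambda + \lambda^{-1}) = 2 + f_1$ already determines $f$, which matches the conjecture's claim that $r_1, \ldots, r_{d+1}$ suffice in general.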
Consider the following probabilistic model: We choose an infinite string, call it $\mathcal{A}$, of $A$'s, $C$'s, $G$'s and $T$'s. Each letter of the string is chosen independently at random, with probabilities $p_A$, $p_C$, $p_G$ and $p_T$.
Next, we copy the string $\mathcal{A}$ to form a new string $\mathcal{D}_1$. In the copying process, for each pair $(X, Y)$ of symbols in $\{ A, C, G, T \}$, there is some probability $p_1(X \to Y)$ that we will miscopy an $X$ as a $Y$. (The $16$ probabilities stay constant for the entire copying procedure.)
We repeat the procedure to form two more strings $\mathcal{D}_2$ and $\mathcal{D}_3$, using new probability matrices $p_2(X \to Y)$ and $p_3(X \to Y)$.
We then forget the ancestral string $\mathcal{A}$ and measure the $64$ frequencies with which the various possible joint distributions of $\{ A, C, G, T \}$ occur in the descendant strings $(\mathcal{D}_1, \mathcal{D}_2, \mathcal{D}_3)$.
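The measurement process just described is easy to simulate. A minimal sketch follows; the function names, the uniform root distribution, and the $0.9$-diagonal transition matrix are all illustrative choices of mine, not from the model itself:

```python
import random

LETTERS = "ACGT"

def sample_site(root_probs, channels):
    """Draw one ancestral letter and miscopy it through each channel.

    root_probs: dict letter -> probability (the p_A, ..., p_T).
    channels:   list of dicts X -> {Y: p_i(X -> Y)}, one per descendant.
    Returns only the descendant letters; the ancestor is forgotten.
    """
    a = random.choices(LETTERS, weights=[root_probs[x] for x in LETTERS])[0]
    return tuple(
        random.choices(LETTERS, weights=[ch[a][y] for y in LETTERS])[0]
        for ch in channels
    )

def joint_frequencies(root_probs, channels, n_sites=50_000):
    """Estimate the 64 joint frequencies over (D1, D2, D3)."""
    counts = {}
    for _ in range(n_sites):
        triple = sample_site(root_probs, channels)
        counts[triple] = counts.get(triple, 0) + 1
    return {t: c / n_sites for t, c in counts.items()}

# Illustrative parameters: uniform root, each letter copied correctly
# with probability 0.9, errors spread evenly over the other three.
random.seed(0)
uniform = {x: 0.25 for x in LETTERS}
channel = {x: {y: 0.9 if x == y else 0.1 / 3 for y in LETTERS}
           for x in LETTERS}
freqs = joint_frequencies(uniform, [channel] * 3)
```

With these parameters a fully matching triple such as $(A, A, A)$ has probability roughly $0.25 \cdot 0.9^3 \approx 0.18$, so the $64$ estimated frequencies are far from uniform.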
Our procedure depended on $4 + 3 \times 16 = 52$ inputs: the $(p_A, p_C, p_G, p_T)$ and the $p_i(X \to Y)$. When you remember that probabilities must sum to $1$ (one constraint on the root distribution and one on each row of each transition matrix), there are actually only $3 + 3 \times 12 = 39$ independent parameters here, and we are getting $64 - 1 = 63$ measurements (one less than $64$ because the frequencies sum to $1$). So the set of possible outputs is a semialgebraic set of codimension $24$.
Conjecture: Elizabeth Allman has a conjectured list of generators for the Zariski closure of the set of possible measurements.
Motivation: Obviously, this is a model of evolution, and one which (some) biologists actually use. Allman and Rhodes have shown that, if you know generators for the ideal in this particular case, then you can write down generators for every possible evolutionary history (more descendants, any known sequence of branchings, and so on). There are techniques in statistics where knowing this Zariski closure would be helpful progress.
Reward: Elizabeth Allman will personally catch, clean, smoke and ship an Alaskan salmon to you if you find the generators. (Or serve it to you fresh, if you visit her in Alaska.)
What you describe seems to me to be a normal mode of mathematical
progress, and I would urge you simply to carry on! Ride that train
as far as you can.
It often happens that someone's mathematical results can be
improved or generalized in various ways, and when this is
possible, it is mathematically desirable that the generalization
be undertaken well.
You may be worried that the value of this work is less than some
other totally original work. If the generalizations are routine,
then indeed that may be true. But from what you say, this doesn't
seem to be your case. Many generalizations are not routine and
such work is definitely worth doing.
Finally, let me caution you to guard yourself against a certain
mistake that sometimes undermines motivation for a young
researcher. Namely, it often happens in mathematical research that
we begin in a state of terrible confusion about a topic; as
research progresses, things only gradually become clarified. After
hard work, we finally begin to understand what is the actual
question we should be asking; and then, after fitful starts and
retreats, we gain some hard-won insight; until finally, after
laborious investigation, we have the answer.
But alas — it is at this point that the crippling illness
strikes. Namely, because the researcher now understands the
problem and its solution so well, he or she begins to lose sight
of the value of the very solution that was made. The mathematical advance begins to
seem trivial or obvious, perhaps without value. Having solved the problem so well,
the mathematician becomes a victim of his or her own success.
Because all is now so clear, it is harder to appreciate the value
of the achievement that was made.
Please guard against this disease! Do not denigrate your
achievement simply because it seems easy after you have made it.
In many mathematical realms, the actual achievement in research is
that certain issues and ideas become easy to understand. Please look
upon the ease of the answer at the end as part of the achievement itself, and
think back to the initial state of confusion at the beginning of
the work to realize the value of what you have done.
So please carry on and ride that railroad as far as it will take
you.
Best Answer
I find George Dantzig's story particularly impressive and inspiring.
The two problems that Dantzig solved were eventually published as "On the Non-Existence of Tests of 'Student's' Hypothesis Having Power Functions Independent of σ" (1940) and "On the Fundamental Lemma of Neyman and Pearson" (1951).