Even though the $f_i$ are not polynomials I'll give the answer in that case because it is very nice and it seems like there is some interest. I have to stress in advance though that the answer exploits the resulting algebraic structure in a fundamental way, and so is unlikely to extend to the case when the $f_i$ are not polynomials.
First of all, a semidefinite program (SDP) is an optimization problem with matrix variables, a linear objective, and positive semidefiniteness constraints on symmetric (real) matrices, in addition to the standard linear (in)equalities allowed in linear programming. SDPs are a generalization of linear programs (LPs) and are vastly more expressive; LPs are the special case in which the matrices are constrained to be diagonal. SDPs can also be viewed as a noncommutative version of LPs.
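For concreteness, one common standard form (writing $X \succeq 0$ for "$X$ is positive semidefinite") is

$$\begin{aligned} \min_X \quad & \operatorname{tr}(CX) \\ \text{s.t.} \quad & \operatorname{tr}(A_i X) = b_i, \quad i = 1, \dots, m, \\ & X \succeq 0, \end{aligned}$$

where $X$ ranges over symmetric matrices and the symmetric matrices $C, A_1, \dots, A_m$ and scalars $b_i$ are given data.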
The relationship with semi-infinite programming suggested by Gilead is, I think, the fact that one can view the constraint "$A$ is positive semidefinite" as $x^TAx\geq 0$ for all $x$, which is an infinite family of linear constraints. On the other hand, one can view any convex constraint in this way, because any closed convex set can be described by (infinitely many) linear inequalities.
Theoretically SDPs are important both because many problems can be written as SDPs and because they can be solved in polynomial time using interior point methods, almost as efficiently as LPs. In practice the technology is much newer than that for LPs, so off-the-shelf software cannot yet solve SDPs of nearly the same size, but those days seem to be getting closer.
To see how to turn your problem into an SDP if the $f_i$ were polynomials, let $x_i$ be your decision variables. Note that each $f_i(1)$ is just a constant, so your first constraint is a linear equation in the $x_i$, which poses no problem in an SDP. For the others, we need to do a little work.
Let $H$ denote the operator which sends a symmetric matrix to the vector of its sums along antidiagonals, so
$H: \begin{bmatrix}a & b \\ b & c \end{bmatrix}\mapsto\begin{bmatrix}a & 2b & c\end{bmatrix}$, and so on for bigger matrices. If we identify polynomials with their sequences of coefficients, then $p = q^2$ as polynomials if and only if $p = H(qq^T)$ as vectors. Therefore $p$ is a sum of squares (SOS) if and only if $p = H(Q)$ for a positive semidefinite $Q$ (any such $Q$ is a sum of matrices of the form $qq^T$). Now, if a polynomial $p$ is SOS then it is automatically nonnegative everywhere. Conversely, one can show that any univariate nonnegative polynomial is a sum of squares. This gives us an exact characterization of nonnegative univariate polynomials in terms of positive semidefinite matrices. Thinking of the matrix $Q$ as a new decision variable, we can write the constraint "$p$ is nonnegative" in a semidefinite program; the solver will find a $Q$ which certifies this.
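To make the $H$ construction concrete, here is a minimal feasibility sketch in Python; cvxpy is just my choice of modeling tool for illustration (not something the question specifies), and any SDP interface would work the same way.

```python
import numpy as np
import cvxpy as cp

# Coefficients of p(x) = p[0] + p[1]*x + ... + p[2d]*x^(2d), lowest degree first.
# Example: p(x) = (x^2 + x + 1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1.
p = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
d = (len(p) - 1) // 2  # p must have even degree to have a chance of being SOS

# Gram matrix Q, indexed by the monomials 1, x, ..., x^d.
Q = cp.Variable((d + 1, d + 1), PSD=True)

# H(Q) = p: the k-th coefficient of p is the sum of Q along its k-th antidiagonal.
constraints = [
    sum(Q[i, k - i] for i in range(max(0, k - d), min(k, d) + 1)) == p[k]
    for k in range(2 * d + 1)
]

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status)  # "optimal" here means an SOS certificate Q was found
```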
Similarly, a polynomial $p$ is nonnegative on an interval $[w,\infty)$ if and only if it is of the form $p(x) = SOS_1(x) + (x-w)\cdot SOS_2(x)$ for some SOS polynomials $SOS_i$. Again, one direction is obvious and the other requires a little effort (write $p$ in factored form and group the factors cleverly). Therefore we can also write nonnegativity on an interval in terms of positive semidefinite matrices, and hence use it as a constraint in an SDP.
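For example, $p(x) = x^2 - 4$ is negative near the origin, so it is not SOS, yet it is nonnegative on $[2,\infty)$; indeed
$$x^2 - 4 = (x-2)^2 + (x-2)\cdot 4,$$
which has the required form with $SOS_1(x) = (x-2)^2$, $SOS_2(x) = 4$, and $w = 2$.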
Now note that when we use $H$ to define the constraint that a polynomial $p$ is SOS, we are writing linear equality constraints between the coefficients of $p$ and some linear functionals of the matrix inside $H$. Similarly when we write the constraint that $p$ is nonnegative on an interval: it is a set of linear equalities between decision variables, plus the constraint that certain matrices (the ones defining the SOS polynomials) are positive semidefinite.
Until now we've been thinking of $p$ as a fixed polynomial. But because arbitrary linear equalities between decision variables are allowed in an SDP, we can just as easily write the constraint "$\sum_{i=1}^N x_i f_i(y)$ is nonnegative for $y\in[2,\infty)$" using this method, because the polynomial in question has coefficients which are linear in the decision variables.
Putting this all together gives a semidefinite program which would express exactly what you want in the case that the $f_i$ are polynomials. One could then either find a feasible $x_i$, prove that none exists, or optimize a linear functional of the $x_i$, all in polynomial time. Unfortunately, because of the way we have used the algebraic structure of the problem, this is unlikely to extend to non-polynomial $f_i$.
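Here is a minimal sketch of the assembled SDP, again in cvxpy. Everything concrete in it is a placeholder I made up for illustration: the coefficient matrix $F$, the normalization $p(1) = 1$ standing in for your first constraint, and the assumption that the degree $n$ is even (the odd case just changes the sizes of the Gram matrices).

```python
import numpy as np
import cvxpy as cp

# Hypothetical data: row i of F holds the coefficients of f_i, lowest degree first.
F = np.array([[1.0, 0.0, 1.0],   # f_1(y) = 1 + y^2
              [0.0, 1.0, 0.0]])  # f_2(y) = y
N = F.shape[0]
n = F.shape[1] - 1               # degree bound for p; assumed even in this sketch
w = 2.0                          # the interval is [w, infinity)

x = cp.Variable(N)
p = F.T @ x                      # coefficients of p(y) = sum_i x_i f_i(y), linear in x

def sos(m):
    """A PSD Gram matrix of size m and the coefficients of the SOS polynomial H(Q)."""
    Q = cp.Variable((m, m), PSD=True)
    c = [sum(Q[i, k - i] for i in range(max(0, k - m + 1), min(k, m - 1) + 1))
         for k in range(2 * m - 1)]
    return Q, c

Q1, s1 = sos(n // 2 + 1)         # SOS_1, degree <= n
Q2, s2 = sos(n // 2)             # SOS_2, degree <= n - 2

def coeff(c, k):                 # k-th coefficient, 0 when out of range
    return c[k] if 0 <= k < len(c) else 0

# Match coefficients of p(y) = SOS_1(y) + (y - w) * SOS_2(y).
constraints = [p[k] == coeff(s1, k) + coeff(s2, k - 1) - w * coeff(s2, k)
               for k in range(n + 1)]
constraints.append(cp.sum(p) == 1)  # p(1) = 1, a stand-in for the f_i(1) constraint

prob = cp.Problem(cp.Minimize(0), constraints)  # or any linear objective in x
prob.solve()
print(prob.status, x.value)
```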
Finally, I should note that if you're interested in this sort of thing I highly recommend checking out my advisor Pablo Parrilo's course notes on MIT OpenCourseWare. You can find the link on his website.
Where do the different objective functions come from? Perhaps if we knew more about your problem we could make a helpful suggestion about how to approach it.
If the objective functions are totally unrelated to each other, then beyond having a feasible solution to the LP, solving one of your LPs isn't very useful for solving any of the others.
The standard approach that you should be aware of is "warm starting" the simplex method, using the optimal basis from problem i as the starting point for solving problem i+1. Note that although warm starting the simplex method can be very helpful for small changes in the objective function, it can actually slow things down when the objective changes substantially. Most modern solvers use a heuristic "crash basis" to start the simplex method, and these usually work very well.
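As an illustration only, here is the pattern in Gurobi's Python interface (one solver choice among many; the model and objective vectors are made up). After the first solve, changing the objective and re-optimizing lets the solver warm start the simplex method from the basis it already has.

```python
import gurobipy as gp
from gurobipy import GRB

# Made-up LP: one feasible region, a sequence of different objective vectors.
m = gp.Model("warm_start_demo")
x = m.addVars(3, name="x")           # x >= 0 by default
m.addConstr(x[0] + x[1] + x[2] <= 10)
m.addConstr(2 * x[0] + x[1] >= 3)

for c in [(1.0, 2.0, 3.0), (1.1, 2.0, 3.0), (5.0, -1.0, 0.0)]:
    m.setObjective(sum(ci * x[i] for i, ci in enumerate(c)), GRB.MINIMIZE)
    m.optimize()  # after the first solve, this starts from the current basis
```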
For interior point methods for linear programming, there are no well-developed methods for warm starting with a new objective function.
If I understand your problem correctly, it can't be transformed into an LP in general. The reason is that LP is convex, and your objective function is not convex (it is not even continuous). So I suggest you look at a less general case.