Even though the $f_i$ are not polynomials I'll give the answer in that case because it is very nice and it seems like there is some interest. I have to stress in advance though that the answer exploits the resulting algebraic structure in a fundamental way, and so is unlikely to extend to the case when the $f_i$ are not polynomials.
First of all, a semidefinite program (SDP) is an optimization problem with matrix variables, a linear objective, and positive semidefiniteness constraints on symmetric (real) matrices, in addition to the standard linear (in)equalities allowed in linear programming. SDPs are a generalization of linear programs (LPs) and are vastly more expressive; LPs are the special case in which the matrices are constrained to be diagonal. SDPs can also be viewed as a noncommutative version of LPs.
The relationship with semi-infinite programming suggested by Gilead is, I think, the fact that one can view the constraint "$A$ is positive semidefinite" as $x^TAx\geq 0$ for all $x$, which is an infinite number of linear constraints. On the other hand, one can view any convex constraint in this way, because any closed convex set can be described by (infinitely many) linear inequalities.
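To make this concrete, here is a small numerical illustration (my own toy example, not from the answer above): for a positive semidefinite $A$, the quadratic form $x^TAx$ is nonnegative at every $x$, while a single violating $x$ certifies that a matrix is not positive semidefinite.

```python
# Toy illustration: each fixed x gives one linear inequality on the
# entries of A, namely x^T A x >= 0; PSD means all of them hold.

def quad_form(A, x):
    """Compute x^T A x for a small dense matrix given as nested lists."""
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

A_psd = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 1 and 3: PSD
A_bad = [[1.0, 2.0], [2.0, 1.0]]   # eigenvalues 3 and -1: not PSD

assert quad_form(A_psd, [1.0, -1.0]) >= 0.0
assert quad_form(A_bad, [1.0, -1.0]) < 0.0   # witness of indefiniteness
```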
Theoretically SDPs are important both because many problems can be written as SDPs and because they can be solved using interior point methods in polynomial time, almost as efficiently as LPs in theory. In practice, the technology is much newer than that for LPs so one cannot solve SDPs which are nearly as big using off-the-shelf software, but those days seem to be getting closer.
To see how to turn your problem into an SDP if the $f_i$ were polynomials, let $x_i$ be your decision variables. Note that $f_i(1)$ is just a constant, so your first constraint is just a linear equation on the $x_i$ and that is no problem in an SDP. For the others, we need to do a little work.
Let $H$ denote the operator which sends a symmetric matrix to its sums along antidiagonals, so
$H: \begin{bmatrix}a & b \\ b & c \end{bmatrix}\mapsto\begin{bmatrix}a & 2b & c\end{bmatrix}$, and so on for bigger matrices. If we identify polynomials with their sequences of coefficients, then $p = q^2$ as polynomials if and only if $p = H(qq^T)$ as vectors. Therefore $p$ is a sum of squares (SOS) if and only if $p = H(Q)$ for a positive semidefinite $Q$ (any such $Q$ is a sum of matrices of the form $qq^T$). Now, if a polynomial $p$ is SOS then it is automatically nonnegative everywhere. Conversely, one can show that any univariate nonnegative polynomial is a sum of squares. This gives us an exact characterization of nonnegative polynomials in terms of positive semidefinite matrices. Thinking of the matrix $Q$ as a new decision variable, we can write the constraint "$p$ is nonnegative" in a semidefinite program; the solver will find a $Q$ which certifies this.
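A quick sanity check of the identity $p = H(qq^T) \iff p = q^2$, on a concrete $q$ of my choosing (helper names are mine, not standard):

```python
# Identify a polynomial with its coefficient list [c0, c1, ...]
# (lowest degree first) and check that H(q q^T) equals the
# coefficient vector of q^2.

def outer(q):
    """The rank-one matrix q q^T as nested lists."""
    return [[qi * qj for qj in q] for qi in q]

def H(Q):
    """Sum a square matrix along its antidiagonals."""
    n = len(Q)
    p = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            p[i + j] += Q[i][j]
    return p

def poly_mul(p, q):
    """Coefficient list of the product polynomial."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

q = [1, -3, 2]                        # q(x) = 1 - 3x + 2x^2
assert H(outer(q)) == poly_mul(q, q)  # coefficients of q(x)^2
```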
Similarly, a polynomial $p$ is nonnegative on an interval $[w,\infty)$ if and only if it is of the form $p(x) = SOS_1(x) + (x-w)\cdot SOS_2(x)$ for some SOS polynomials $SOS_i$. Again, one direction is obvious and the other requires a little effort (write $p$ in factored form and group the factors cleverly). Therefore we can also write nonnegativity on an interval in terms of positive semidefinite matrices, and hence use it as a constraint in an SDP.
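Here is one worked instance of that decomposition (an example I constructed, not taken from the answer): $p(x) = x^2 - 4$ is nonnegative on $[2,\infty)$, and indeed $p(x) = (x-2)^2 + (x-2)\cdot 4$, where $(x-2)^2$ is an SOS and the constant $4 = 2^2$ is an SOS.

```python
# Verify coefficientwise that x^2 - 4 = (x-2)^2 + (x-2) * 4,
# i.e. p = SOS_1 + (x - w) * SOS_2 with w = 2.

def poly_mul(p, q):
    """Coefficient list of the product polynomial."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    """Coefficientwise sum, padding the shorter list with zeros."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

sos1 = poly_mul([-2, 1], [-2, 1])   # (x - 2)^2, a square
term = poly_mul([-2, 1], [4])       # (x - 2) times the SOS constant 4
assert poly_add(sos1, term) == [-4, 0, 1]   # x^2 - 4
```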
Now note that when we use $H$ to define the constraint for a polynomial $p$ to be SOS, we are writing linear equality constraints between the coefficients of $p$ and some linear functionals of the matrix inside $H$. Similarly for when we write the constraint that $p$ is nonnegative on an interval: it is a linear equality between some decision variables, plus the constraint that certain matrices (the ones defining the SOS polynomials) are positive semidefinite.
Until now we've been thinking of $p$ as a fixed, given polynomial. But because arbitrary linear equalities between decision variables are allowed in an SDP, we can just as easily write the constraint "$\sum_{i=1}^N x_i f_i(y)$ is nonnegative for $y\in[2,\infty)$" using this method, because the polynomial in question has coefficients which are linear in the decision variables.
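The key point, that the coefficient vector of $\sum_i x_i f_i$ depends linearly on the $x_i$, can be seen directly (the polynomials $f_i$ below are made up for illustration):

```python
# Made-up example polynomials f_i, as coefficient lists (lowest degree
# first); the coefficients of sum_i x_i * f_i are linear in x.

F = [[1, 0, 1],   # f_1(y) = 1 + y^2
     [0, 2, 0]]   # f_2(y) = 2y

def combine(x, F):
    """Coefficient vector of sum_i x[i] * f_i."""
    deg = max(len(f) for f in F)
    return [sum(xi * (f[k] if k < len(f) else 0) for xi, f in zip(x, F))
            for k in range(deg)]

p = combine([3, -1], F)        # 3*f_1 - f_2 = 3 - 2y + 3y^2
assert p == [3, -2, 3]

# Linearity: scaling the decision variables scales every coefficient.
assert combine([6, -2], F) == [2 * c for c in p]
```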
Putting this all together gives a semidefinite program which would express exactly what you want in the case that the $f_i$ are polynomials. One could then either find a feasible $x_i$, prove that none exists, or optimize a linear functional of the $x_i$ all in polynomial time. Unfortunately because of the way we have used the algebraic structure of the problem, this is unlikely to extend to non-polynomial $f_i$.
Finally, I should note that if you're interested in this sort of thing I highly recommend checking out my advisor Pablo Parrilo's course notes on MIT OpenCourseWare. You can find the link on his website.
Best Answer
Your additional constraints make the feasible region for your problem nonconvex, and thus it cannot be represented as a linear programming problem.
If you can obtain an upper bound $M_{i}$ on $x_{i}$, then there is a standard approach to these problems in which the problem is formulated as a 0-1 mixed integer linear programming problem.
First, we introduce 0-1 variables $y_{i}$, and add the constraints
$x_{i} \geq k_{i}y_{i}$
If $y_{i}=1$, then $x_{i}$ is forced to be greater than or equal to $k_{i}$. If $y_{i}=0$, then this constraint is vacuous.
Next, we add the constraints
$x_{i} \leq M_{i}y_{i}$
If $y_{i}$ is 0, this forces $x_{i}=0$. If $y_{i}=1$, then this constraint does nothing.
This combination of constraints means that either $x_{i}=0$ and $y_{i}=0$ or $x_{i}\geq k_{i}$ and $y_{i}=1$.
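The case analysis above can be checked mechanically. Below is a sketch for a single variable, with hypothetical values for $k_{i}$ and $M_{i}$:

```python
# Big-M construction for one semi-continuous variable x:
# x >= k*y and x <= M*y with binary y forces either
# (x = 0, y = 0) or (k <= x <= M, y = 1).

k, M = 3.0, 10.0   # hypothetical lower threshold and upper bound

def feasible(x, y):
    """True iff (x, y) satisfies x >= k*y, x <= M*y, y in {0, 1}."""
    return y in (0, 1) and x >= k * y and x <= M * y

assert feasible(0.0, 0)        # x = 0 is allowed with y = 0
assert feasible(5.0, 1)        # k <= x <= M is allowed with y = 1
assert not feasible(1.0, 1)    # 0 < x < k is excluded
assert not feasible(1.0, 0)    # any x > 0 forces y = 1
```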
There are many approaches to solving the resulting 0-1 mixed integer linear programming problem, including branch and bound methods and cutting plane algorithms. In practice, the most powerful methods (implemented in closed-source commercial codes such as IBM's CPLEX as well as a number of open-source noncommercial software packages) combine these two general approaches into a "branch and cut" approach.
It is also possible to directly implement these "semi-continuous variables" within a branch and cut algorithm, and this approach does not require an upper bound $M_{i}$. This feature is available for example in IBM's CPLEX package.