I think you may be misunderstanding how the simplex method is used to solve quadratic programs. It doesn't operate on the vertices of the feasible region of the original quadratic program; it operates on the vertices of the feasible region of the KKT conditions for the original quadratic program. This is because 1) the KKT conditions are sufficient as well as necessary for convex problems, and 2) for a quadratic program the KKT conditions are linear (well, except for complementary slackness, but that can be handled easily). Thus solving the original convex quadratic program reduces to solving its linear KKT conditions, and the simplex method is used to handle the latter. (See, for example, here.)
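To make this concrete, here is the KKT system for one standard form of convex quadratic program, $\min \frac{1}{2}x^TQx + c^Tx$ subject to $Ax \le b$, $x \ge 0$, with $Q$ positive semidefinite (this particular form is just my choice of illustration; write $\lambda$ for the multipliers on $Ax \le b$ and $\mu$ for those on $x \ge 0$):
$$\begin{aligned}
Qx + c + A^T\lambda - \mu &= 0 && \text{(stationarity)}\\
Ax \le b,\quad x &\ge 0 && \text{(primal feasibility)}\\
\lambda \ge 0,\quad \mu &\ge 0 && \text{(dual feasibility)}\\
\lambda^T(b - Ax) = 0,\quad \mu^T x &= 0 && \text{(complementary slackness)}
\end{aligned}$$
Everything except the last line is linear in $(x, \lambda, \mu)$, which is exactly what makes simplex-style pivoting applicable.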
All of this means that the simplex method will output a vertex of the feasible region of the KKT conditions rather than necessarily a vertex of the feasible region of the original quadratic program.
However, if the quadratic program has its optimal solution at a vertex, then any algorithm for convex quadratic programming will output that vertex as the optimal solution. (Well, assuming there aren't multiple optimal solutions.) So if you use one of the polynomial-time algorithms, it will output the vertex you want in polynomial time. For example, the ellipsoid method will work just fine, as in this recent answer on Math Overflow.
Added: If the quadratic programming problem has multiple optimal solutions, and you use, say, a polynomial-time interior-point algorithm to find one of them, it won't necessarily find one of the vertex optimal solutions. However, there is a polynomial-time algorithm due to Berkelaar, Jansen, Roos, and Terlaky for finding an optimal basis (which will give you an optimal vertex) from an optimal interior solution to a quadratic program. So all of that together will get you to an optimal vertex in polynomial time.
Let $x$ and $y$ be two arbitrary feasible solutions, and let $\alpha \in [0,1]$. Now show that the solution $\alpha x + (1-\alpha)y$ is feasible.
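In case it helps to see the computation: if the feasible region is a polyhedron $\{x : Ax \le b\}$ (my assumption about the constraint form; equalities and sign restrictions work the same way, constraint by constraint), then
$$A\bigl(\alpha x + (1-\alpha)y\bigr) = \alpha Ax + (1-\alpha)Ay \le \alpha b + (1-\alpha)b = b,$$
using $Ax \le b$, $Ay \le b$, and $\alpha \ge 0$, $1-\alpha \ge 0$.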
The fastest method is to go over all points in $F$ and compute their objective values, which takes $\mathcal{O}(nd)$ steps, where $n$ is the cardinality of $F$ and $d$ is the dimension of the points.
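A minimal sketch of this enumeration in Python, taking the optimum to be a minimum for concreteness (the representation of $F$ as a list of $d$-tuples and the function name are my assumptions):

```python
def brute_force_min(F, c):
    """Return the point of F minimizing the linear objective c^T x.

    F: list of points, each a sequence of d numbers.
    c: sequence of d numbers (the cost vector).
    Total work is O(n*d), with n = len(F): n inner products of O(d) each.
    """
    best_point, best_value = None, float("inf")
    for x in F:
        value = sum(cj * xj for cj, xj in zip(c, x))  # c^T x, O(d)
        if value < best_value:
            best_point, best_value = x, value
    return best_point, best_value

# For example, brute_force_min([(1, 2), (3, -1)], (2, 1)) returns ((1, 2), 4).
```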
Even if you are told that the optimum is $x^*$, or are told an objective value close to $c^Tx^*$, you still need to check every coordinate of every other point: for any other $x$, a single unread term $c_j x_j$ can be arbitrarily large, so the value of $c^Tx$ cannot be bounded without reading all of its coordinates.