You should first accept the fact that it's an elliptic integral, and therefore doesn't have an elementary expression without elliptic functions. If you had a numerical library with elliptic functions, then great. Otherwise, you need to either implement elliptic functions yourself, or implement numerical integration of your integral.
I recommend numerical integration, just because in this context it is conceptually simple and reliable. Your integrand has a fairly tame form: it can't blow up, it is continuous, and it is even real analytic unless it touches zero. In this situation, Gaussian quadrature has excellent properties. I don't feel like doing a precise calculation, but I would expect that Gaussian quadrature with just 5 evaluation points already comes fairly close to the exact answer for any choice of the coefficients.
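For instance, here is a minimal sketch in Python of 5-point Gauss-Legendre quadrature applied to the arc-length integrand of one cubic segment; the coefficient arrays are made-up placeholders:

```python
import numpy as np

# Hypothetical cubic segment (x(t), y(t)) on [0, 1]; these coefficients
# are placeholders, listed lowest power first.
xc = np.array([0.0, 1.0, -0.5, 0.2])   # x(t) = t - 0.5 t^2 + 0.2 t^3
yc = np.array([0.0, 0.5, 1.0, -0.3])   # y(t) = 0.5 t + t^2 - 0.3 t^3

def speed(t):
    """The arc-length integrand sqrt(x'(t)^2 + y'(t)^2)."""
    dx = np.polyval(np.polyder(xc[::-1]), t)  # np.polyval wants highest power first
    dy = np.polyval(np.polyder(yc[::-1]), t)
    return np.hypot(dx, dy)

# 5-point Gauss-Legendre rule, mapped from [-1, 1] to [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(5)
arc_length = 0.5 * np.sum(weights * speed(0.5 * (nodes + 1.0)))
```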
The above is part of an answer, but not a complete one if you really want 64 bits of accuracy. Assuming that the integrand is real analytic, Gaussian quadrature or Clenshaw-Curtis will converge exponentially. It seems reasonable enough to use Clenshaw-Curtis, which lets you recycle evaluation points and has a predictable formula for the quadrature weights, with more and more points until the answer looks accurate.
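A sketch of this strategy, again in Python; the weight formula is the standard one (as in Trefethen's *Spectral Methods in MATLAB*), and the doubling loop is a simple stand-in for a production driver, which would also reuse the function values at the old points since Chebyshev grids nest under doubling:

```python
import numpy as np

def clencurt(n):
    """Clenshaw-Curtis nodes and weights on [-1, 1] (n + 1 Chebyshev points)."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n ** 2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k ** 2 - 1)
        v -= np.cos(n * theta[1:n]) / (n ** 2 - 1)
    else:
        w[0] = w[n] = 1.0 / n ** 2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k ** 2 - 1)
    w[1:n] = 2.0 * v / n
    return x, w

def integrate_cc(f, a, b, tol=1e-15, n=8, nmax=4096):
    """Keep doubling the point count until two estimates agree to tol."""
    scale, shift = 0.5 * (b - a), 0.5 * (a + b)
    x, w = clencurt(n)
    est = scale * (w @ f(scale * x + shift))
    while n < nmax:
        n *= 2
        x, w = clencurt(n)
        new = scale * (w @ f(scale * x + shift))
        if abs(new - est) <= tol * max(1.0, abs(new)):
            return new
        est = new
    return est
```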
The only problem is the annoying case in which the integrand touches zero, or comes close to touching zero, which can be interpreted geometrically as a point on the spline with nearly zero velocity. (Typically it looks like a cusp.) Then the integrand is NOT real analytic and these numerical methods do not converge exponentially. Or, in the near-zero case, the integrand is analytic but the exponential rate of convergence is slow. I'm sure that there are tricks available that will handle this case properly: you could cut out an interval near the bad point and do something different, or you could subtract off a known integral to tame the bad point. But at the moment I do not have specific advice for an algorithm that is both reasonably fast and reasonably convenient. Clenshaw-Curtis is convenient and usually very fast for this problem, but not all that fast in bad cases if you push it to 64 bits of precision.
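To see why, note that the quantity under the square root is $x'(t)^2 + y'(t)^2$, a non-negative quartic $q(t)$; if it vanishes at $t_0$, it must have a (generically) double root there, so
$$\sqrt{q(t)} = |t - t_0|\,\sqrt{r(t)}$$
with $r$ analytic and $r(t_0) > 0$ in the generic cusp case. The factor $|t - t_0|$ is exactly the kind of kink that kills exponential convergence of spectral quadrature rules.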
Also, these methods can be thought of as a more sophisticated version of chordal approximation. Chordal approximation faces the same issue, except worse: It never converges at an exponential rate for a non-trivial cubic spline. If you want 64 bits, you might need a million chords.
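For comparison, chordal approximation in the same Python setup (using the placeholder coefficients `xc`, `yc` from the sketch above) is just:

```python
import numpy as np

def chordal_arc_length(n):
    """Approximate arc length by summing n chords between sampled points."""
    t = np.linspace(0.0, 1.0, n + 1)
    x = np.polyval(xc[::-1], t)
    y = np.polyval(yc[::-1], t)
    return np.sum(np.hypot(np.diff(x), np.diff(y)))
```

The error here decays only polynomially in $n$, as opposed to exponentially for the spectral rules above.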
Meanwhile, the GNU Scientific Library does have elliptic function routines. If you have elliptic functions, then your integral, while still not all that simple, does have a closed-form expression in terms of them. I don't know whether GSL or an equivalent is available in your software environment. If it is, then an elliptic function formula is probably by far the fastest option (for the computer, not necessarily for you).
In a recent comment, bpowah says "All I wanted to know is whether or not it was faster to compute the given integral numerically or exactly." Here is a discussion. Computing an integral, or any transcendental quantity, "exactly" is an illusion. Transcendental functions are themselves computed by approximate numerical procedures of various kinds: Newton's method, power series, arithmetic-geometric means, etc. There is an art to coding these functions properly. A competitive implementation of a function even as simple as sin(x) is already non-trivial.
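As an illustration of the last point, here is how the arithmetic-geometric mean computes a complete elliptic integral, one of the classical tricks alluded to above; a sketch, not production code:

```python
import math

def agm(a, b, tol=1e-16):
    """Arithmetic-geometric mean; the iteration converges quadratically."""
    while abs(a - b) > tol * a:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

def ellint_K(k):
    """Complete elliptic integral of the first kind: K(k) = pi / (2 AGM(1, k'))."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
```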
Even so, I'm sure that it's faster in principle to evaluate the integral in question in closed form using elliptic functions. It could be hard work to do this right, because the first step is to factor the quartic polynomial under the square root. That already requires either the quartic formula (unfortunately not listed in the GNU Scientific Library even though it has the cubic) or a general polynomial solver (which is in GSL but has unclear performance and reliability). The solution also requires elliptic functions with complex arguments, even though the answer is real. It could require careful handling of branch cuts of the elliptic functions, which are multivalued. With all of these caveats, it doesn't seem worth it to work out an explicit formula. The main fact is that there is one, if you have elliptic functions available but not otherwise.
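For what it's worth, the factoring step itself is one line with a general polynomial solver; the quartic coefficients below are placeholders:

```python
import numpy as np

# Hypothetical quartic q(t) = x'(t)^2 + y'(t)^2 under the square root;
# np.roots is a stand-in here for GSL's gsl_poly_complex_solve.
q = np.array([0.04, -0.12, 1.50, 0.70, 1.25])  # highest power first
roots = np.roots(q)  # a non-negative real quartic: complex-conjugate pairs
```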
The merit of a numerical integration algorithm such as Gaussian quadrature (or Clenshaw-Curtis, Gauss-Kronrod, etc.) is that it is vastly simpler to code. It won't be as fast, but it should be quite fast if it is coded properly. The only problem is that the integrand loses smoothness if it reaches 0, and nearly so if it comes close to 0. This makes convergence much slower, although still not as slow as approximation with chords. With special handling of the near-singular points, it should still be fine for high-performance numerical computation. For instance, a polished strategy for numerical integration might well be faster than a clumsy evaluation of the relevant elliptic functions.
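In Python, adaptive Gauss-Kronrod is already packaged as scipy.integrate.quad (a wrapper around QUADPACK), and you can warn it about a known near-singular point; `speed` is the integrand sketched earlier and `t0` a hypothetical bad parameter value:

```python
from scipy.integrate import quad

# Adaptive Gauss-Kronrod; `points` lists locations where the integrand
# is non-smooth, so the subdivision concentrates there.
t0 = 0.42  # hypothetical parameter value where the speed nearly vanishes
length, err = quad(speed, 0.0, 1.0, points=[t0], epsabs=1e-14, epsrel=1e-14)
```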
This is a very interesting (yet rather vague) question. Most answers approached it from the direction of mathematical logic, but I am not sure this is the only (or even the most appropriate) way to think about it. The notion of coincidence is by itself very complicated (see http://en.wikipedia.org/wiki/Coincidence). One way to put it on rigorous grounds is to use a probabilistic/statistical framework. Indeed, as Timothy mentioned, it is sometimes possible to give a probabilistic heuristic in support of some mathematical statement. But it is a notoriously hard statistical problem to determine a posteriori whether some events represent a coincidence.
I am not sure that (as the OP assumes) a statement being "true by accident" implies that it can never be proved. Also, I am not sure (as implied by most answers) that "can never be proved" should be interpreted as "does not follow from the axioms". It can also refer to situations where the statement admits a proof, but the proof is as "accidental" as the original statement, so it is unlikely to be found in the systematic way mathematics is developed.
In a sense (as mentioned in quid's answer), the notion of "true by accident" is related to the psychology of mathematics. It concerns the way we perceive mathematical truths more than it does some objective facts about them.
Regarding the reconstruction conjecture: note that we can ask whether the conjecture is true for graphs with at most a million vertices. Here, if it is true, it is certainly provable. So the logic issues disappear, but the main issue of the question remains. (We can replace the logical distinctions by computational complexity distinctions, but I am still not sure this captures the essence of the question.) There is a weaker form of the conjecture called the edge reconstruction conjecture (the same problem, but you delete edges rather than vertices) where much more is known. There is a very conceptual proof that every graph with $n$ vertices and more than $n\log n$ edges is edge-reconstructible. This gives some support to the feeling that maybe vertex reconstruction can also be dealt with.
Finally, I am not aware of a heuristic argument that "there would need to be a massive coincidence for two non-isomorphic graphs to have the same 'deck'", as the OP suggested. (Coming up with a convincing heuristic of this kind would be interesting.) It is known that various graph invariants must take the same value on two such graphs.
Part of what makes this question subtle is that what's intuitive depends on your background knowledge. In particular, the question of what counts as an "explicit tiny quantity" is hard to pin down. For example, Bailey, Borwein, and Borwein pointed out the example $$ \left(\frac{1}{10^5} \sum_{n=-\infty}^\infty e^{-n^2/10^{10}}\right)^2 \approx \pi, $$ which holds to over 42 billion digits. If you know something about Poisson summation, it's pretty obvious that this is a fraud: Poisson summation converts this series into an incredibly rapidly converging series with first term $\pi$, after which everything else is tiny. Even if you don't know about Poisson summation, the $10^5$ and $10^{10}$ should raise some suspicions. However, it's a pretty amazing example if you haven't seen this sort of thing before.
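For the record, the Poisson summation computation is short: applying it to $f(x) = e^{-x^2/10^{10}}$ gives
$$\sum_{n=-\infty}^\infty e^{-n^2/10^{10}} = 10^5\sqrt{\pi}\sum_{k=-\infty}^\infty e^{-\pi^2 10^{10} k^2} = 10^5\sqrt{\pi}\left(1 + 2e^{-\pi^2 10^{10}} + \cdots\right),$$
so the squared expression equals $\pi$ up to a relative error of about $4e^{-\pi^2 10^{10}} \approx 10^{-4.28\times 10^{10}}$, which is where the 42 billion digits come from.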
As for the third question, the truth is more dramatic than that. There are finite identities that are independent of ZFC, or any reasonable axiom system, so you don't even need anything fancy like infinite sums or integrals. This follows from a theorem of Richardson (D. Richardson, Some Undecidable Problems Involving Elementary Functions of a Real Variable, Journal of Symbolic Logic 33 (1968), 514-520, http://www.jstor.org/stable/2271358).
Specifically, consider the expressions obtainable by addition, subtraction, multiplication, and composition from the initial functions $\log 2$, $\pi$, $e^x$, $\sin x$, and $|x|$. Richardson proved that there is no algorithm to decide whether such an expression defines the zero function. Now, if the function is not identically zero, then it can be proved not to be zero (find a point at which it doesn't vanish and compute it numerically with enough accuracy to verify that it is nonzero). If all the identically zero cases could be proved zero in ZFC too, then that would give a decision procedure: do a brute-force search through all possible proofs until you find one that resolves the question. Thus, there exists an expression that actually is identically zero but for which ZFC cannot prove this. In fact, if you go through Richardson's proof you can explicitly construct such an expression, although it will be terrifically complicated. (To be precise, you'll be able to prove in ZFC that if ZFC is consistent, then this identity is unprovable; but by Gödel's second incompleteness theorem you won't be able to prove in ZFC that ZFC is consistent.)
Being independent of ZFC doesn't mean these identities are true for no reason, and in fact Richardson's proof gives a reason. However, his paper shows that there is no systematic way to test identities. You can indeed end up stuck in a situation where a proposed identity really looks true numerically but you just can't find a proof.