Fix $n$, and let $V(x)$ be the polynomial from the question, which we want to show is positive on the open interval $(0,1)$. Set
\begin{align*}
a(n) &= 1024n^2 - \tfrac{4096}{3}n + 320\\
b(n) &= 12288n^3 - 14336n^2 + \tfrac{4864}{3}n + 256\\
W(x) &= x^4\left(V(x)-(n+1)^2(2n+1)^2x^{24n-1}(1-x)^4(a(n)x + b(n)(1-x))\right).
\end{align*}
For all $n\ge12$ we claim that $(1-x)^6$ divides $W(x)$, and that all the coefficients of $W(x)/(1-x)^6$ are nonnegative. In particular, $W(x)$ is nonnegative for $x\ge0$. As $a(n)$ and $b(n)$ are positive for all $n\ge1$, we see that $V(x)$ is positive for all $0<x<1$, provided that $n\ge12$. The cases $2\le n\le 11$ are easily checked directly, for instance by noting that the coefficients of $(1+x)^{\deg V}V(1/(1+x))$ are positive.
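A minimal Sage sketch of this small-$n$ check (it reuses the symbolic `V` from the code further down; the names `S`, `t`, `n0`, `D`, `g` are introduced here only for illustration):

# Sketch: check 2 <= n <= 11 by testing that all coefficients of
# (1+t)^deg(V) * V(1/(1+t)) are positive; V is the symbolic V defined
# in the Sage code below.
S.<t> = QQ[]
for n0 in [2..11]:
    Vn = V(N=n0).polynomial(QQ)   # specialize the symbolic V at n = n0
    D = Vn.degree()
    g = sum(c*(1+t)^(D-k) for k, c in enumerate(Vn.list()))  # = (1+t)^D * Vn(1/(1+t))
    print(n0, all(cf > 0 for cf in g.coefficients()))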
I am not sure about the easiest way to prove the assertion about the coefficients of $W(x)/(1-x)^6$. We compute them via
\begin{equation}
\frac{W(x)}{(1-x)^6}=W(x)\sum_{k\ge0}\binom{k+5}{5}x^k.
\end{equation}
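As a quick sanity check of this expansion in Sage (a throwaway sketch; the truncation order 30 and the ring name `R6` are arbitrary choices):

# Check 1/(1-x)^6 = sum_k binomial(k+5,5) x^k up to order 30
R6.<x> = PowerSeriesRing(QQ, default_prec=30)
lhs = 1/(1-x)^6
print(all(lhs[k] == binomial(k+5,5) for k in range(30)))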
Everything that follows is assisted/confirmed by the Sage code below.
The exponents of $W(x)$ have the form $2ni+j$ for $0\le i\le 12$ and $0\le j\le 10$. Let $a_{i,j}$ be the corresponding coefficient. It is a polynomial in $n$.
We have
\begin{equation}
\frac{W(x)}{(1-x)^6} = \sum_{i,j}a_{i,j}(n)x^{2ni+j}\sum_k\binom{k+5}{5}x^k.
\end{equation}
With $r\ge0$ and $0\le s\le 2n-1$, the coefficient of $x^{2nr+s}$ in $\frac{W(x)}{(1-x)^6}$
is the sum of all $a_{i,j}(n)\cdot\binom{2n(r-i)+s-j+5}{5}$ over all pairs $(i,j)$ with either $i<r$ and $0\le j\le 10$, or $i=r$ and $j\le s$.
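As a spot check of this formula (a sketch that assumes `W` and `aij` from the Sage code below; the specialization $n=12$, $r=3$, $s=7$ is an arbitrary choice of mine):

# Compare the formula against the actual coefficient of the polynomial
# W/(1-X)^6 at the exponent 2*n*r+s, for n = 12, r = 3, s = 7
n0, r, s = 12, 3, 7
Wn = W(N=n0).polynomial(QQ)
Xp = Wn.parent().gen()
q = Wn // (1 - Xp)^6          # exact division, since (1-X)^6 divides W
direct = q[2*n0*r + s]
formula = sum(aij[i,j](n=n0) * binomial(2*n0*(r-i) + s - j + 5, 5)
              for i in [0..r] for j in [0..(s if i == r else 10)])
print(direct == formula)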
We distinguish the cases $0\le s\le 9$ from the cases $s\ge10$. In these latter cases, we write $s=10+y$ with $y\ge0$. Recall that $s\le 2n-1$, so $10+y=2n-1-o$ for a nonnegative $o$. Upon replacing $n$ with $(11+y+o)/2$, we get polynomials in $y$ and $o$.
In the former cases, the coefficients are polynomials in $n$ which have nonnegative coefficients upon replacing $n$ with $n+12$. So these coefficients are nonnegative for all $n\ge12$.
In the latter cases it turns out that in all but one case these polynomials in $y$ and $o$ have nonnegative coefficients. So for nonnegative $y$ and $o$, the values are nonnegative.
For the single exception, multiplying it by a suitable polynomial with nonnegative coefficients yields a polynomial whose coefficients are all nonnegative, which gives the same conclusion.
Remark (answering Jason's question): It is a known fact that a polynomial $V$ is positive on $(0,1)$ if and only if it is a nonnegative linear combination of polynomials $x^i(1-x)^j$. The problem is that it may involve terms where $i+j$ is bigger than $\deg V$. (This doesn't happen here, though.) I used an LP solver to play a little with $V$ and $\frac{V}{(1-x^2)^4x^{2n-2}}$. From that a pattern showed up which led to a solution.
By the way, $V$ actually seems to be a nonnegative linear combination of $x^i(1-x)^{\deg V-i}$. This is equivalent to saying that all the coefficients of $(1+x)^{\deg V}V(1/(1+x))$ are nonnegative. It is not hard to compute explicit expressions for these coefficients, but I don't see an easy argument why they can't be negative.
# Formally compute W
var('X N')
P = (1+X^(4*N-1))*(1+X^(2*N))*(1-X^(2*N+1))
Q = (1+X^(2*N+1))*(1-X^(2*N+2))
V = P*Q^2*P.diff(X,2)+(P*Q.diff(X))^2-P^2*Q*Q.diff(X,2)-(P.diff(X)*Q)^2
a = 1024*N^2 - 4096/3*N + 320
b = 12288*N^3 - 14336*N^2 + 4864/3*N + 256
W = X^4*(V - (2*N + 1)^2*(N + 1)^2*X^(24*N-1)*(1-X)^4*(a*X + b*(1-X)))
# Check that (1-X)^6 divides W(X)
print(all(W.diff(X,i)(X=1).polynomial(QQ) == 0 for i in [0..5]))
# Compute the coefficients a_ij for the exponents 2ni+j of W. Somewhat
# clumsy, as I don't know how to deal with polynomials where exponents
# are symbolic expressions.
#
# l is the list of summands of W
l = [z.canonicalize_radical() for z in W.expand().operands()]
K.<n> = QQ[]
aij = {(i,j): K(0) for i in [0..12] for j in [0..10]}
for term in l:
    c = term(X=1)                        # get coefficient of term
    e = (term.diff(X)/c)(X=1)            # get exponent of term
    c = K(c.polynomial(QQ))              # convert c to proper polynomial in n
    i, j = ZZ(e.diff(N))//2, ZZ(e(N=0))  # clumsy method to compute the pair (i,j)
    aij[i,j] += c
# Check that the coefficients aij[i,j] were correctly computed
Wnew = sum(c(n=N)*X^(2*N*i+j) for (i,j),c in aij.items())
print((W-Wnew).canonicalize_radical().is_trivial_zero())
def bino(k):  # binomial coefficient binom(k+5,5)
    return prod(k+5-z for z in range(5))/120
# Compute the coefficients of W(X)/(1-X)^6
K = K.extend_variables(('y', 'o'))
K.inject_variables()
for r in [0..12]:
    for s in [0..10]:
        d = 1 if s == 10 else 0
        # compute the coefficient of X^(2nr+s+d*y)
        f = sum(aij[i,j]*bino(2*n*(r-i)+s+d*y-j) for i in [0..r]
                for j in [0..(s if i == r else 10)])
        # check non-negativity of the polynomial f
        if d == 0:
            # The coefficient of X^(2nr+s), which is a polynomial in n,
            # has nonnegative coefficients upon replacing n with n+12
            f = f(n=n+12)
            if min(f.coefficients()+[0]) < 0:
                print("False")
        else:
            # The coefficient of X^(2nr+10+y), which is a polynomial in n
            # and y, has to be nonnegative for 0 <= 10+y <= 2n-1, so 10+y
            # = 2n-1-o for nonnegative o. So upon replacing n with
            # (11+y+o)/2, we get a polynomial in y and o which has to be
            # nonnegative for all nonnegative y and o. In all but one
            # case, this holds because the coefficients are nonnegative.
            f = f(n=(11+y+o)/2)
            if min(f.coefficients()+[0]) < 0:
                c = o^2 + 23*o*y + 1360*y^2 + 99*o + 340*y + 1675
                print(min(c.coefficients()+(c*f).coefficients()) >= 0)
Denote $s=b/n$. We want a pointwise bound $$f(x)\leqslant f(s)+(x-s)f'(s),\label{1}\tag{$\heartsuit$}$$ then summing \eqref{1} over $x=a_1,\ldots,a_n$ we get the desired inequality. Note that if $s<a$ (which holds for $n>b/a$) we get \eqref{1} on $[0,a]$ by concavity. To prove \eqref{1} on $[a,b]$, by convexity it suffices to verify \eqref{1} for $x=a$ and $x=b$. For $x=a$ this is already done; for $x=b$ it reads as
$$
f(b)\leqslant f(s)+(b-s)f'(s).
$$ When $n$ is large, the RHS converges to $f(0)+bf'(0)$, since $s=b/n\to0$. Thus it suffices to check that
$$
f(b)<f(0)+bf'(0).
$$ Assume the contrary:
$$
f(b)\geqslant f(0)+bf'(0).
$$ We have $f(x)\leqslant f(0)+f'(0)x$ for all $x\in [0,a]$ by concavity. Denote by $c$ the endpoint of the maximal segment $[0,c]$ on which we have $f(x)\leqslant f(0)+f'(0)x$. Then $c\in [a,b]$ and we have $f(c)=f(0)+f'(0)c$ (otherwise $c$ is not maximal). This yields
$$
f'(c)=\lim_{x\to c-0}\frac{f(c)-f(x)}{c-x}\geqslant f'(0).
$$ Since $f'$ increases on $[a,b]$ by convexity, we get $f'(b)\geqslant f'(c)\geqslant f'(0)$, a contradiction.
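For illustration, a throwaway numeric check in Sage of the two endpoint verifications; the function $f$ and the values of $a$, $b$, $n$ are hypothetical choices of mine, not from the question (this $f$ is concave near $0$ and convex for larger $x$, with inflection near $x\approx 1.21$):

# Check (heartsuit) at x = a and x = b for an illustrative f, a, b, n
f(x) = sqrt(x)*exp(-x)   # concave on roughly [0, 1.21], convex afterwards
fp = f.diff(x)
a, b, n = 1.3, 6.0, 30
s = b/n                  # s = 0.2 < a, so (heartsuit) holds on [0,a] by concavity
for x0 in [a, b]:        # by convexity on [a,b], checking the endpoints suffices
    print(bool(f(x=x0) <= f(x=s) + (x0 - s)*fp(x=s)))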
Best Answer
I don't think your question is a mathematical one, for the question of what all inequalities eventually reduce to has a simple answer: axioms. I interpret it as a metamathematical question, and even then I believe the closest answer is the suggestion above about using everything you know.
An inequality is a fairly general mathematical term, which can be attributed to any comparison. One example is complexity hierarchies, where you compare which of two problems has the higher complexity, can be solved faster, etc. Another is studying convergence of series, that is, comparing a quantity with infinity; here you find Tauberian theory, etc. Even though you did not specify in your question which kind of inequalities you are primarily interested in, I am assuming that you are talking about comparing two functions of several real/complex variables. I would be surprised if there were a list of exclusive methods that inequalities of this sort follow from. It is my impression that there is a plethora of theorems/principles/tricks available, and the proof of an inequality is usually a combination of some of these. I will list a few things that come to my mind when I'm trying to prove an inequality; I hope it helps a bit.
First I try to see if the inequality will follow from an equality, that is, to recognize the terms in your expression as part of some identity you are already familiar with. I disagree with you when you say this shouldn't be counted as a method to prove inequalities. Say you want to prove that $A\geq B$, and you can prove $A=B+C^2$; then, sure, the inequality follows from using "squares are nonnegative", but most of the time it is the identity that proves to be the hardest step. Here's an example: given reals $a_1,a_2,\dots, a_n$, you want to prove that $$\sum_{i,j=1}^n \frac{a_ia_j}{1+|i-j|} \geq 0.$$ After you realize that the sum is just equal to $$\frac{1}{2\pi}\int_{0}^{2\pi}\int_{0}^{1}\frac{1-r^{2}}{1-2r\cos(x)+r^{2}}\cdot \Big|\sum_{k=1}^{n}a_{k}e^{-ikx}\Big|^{2}\,dr\,dx$$ then, yes, everything is obvious, but spotting the equality is clearly the nontrivial step in the proof.
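In matrix language, this identity says that the matrix $\left[\frac{1}{1+|i-j|}\right]_{i,j}$ is positive definite; here is a quick Sage check of that consequence (a sketch, with the size $8$ chosen arbitrarily):

# The integral representation shows [1/(1+|i-j|)] is positive definite
M = matrix(QQ, 8, 8, lambda i, j: 1/(1+abs(i-j)))
print(M.is_positive_definite())   # expected: True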
In some instances it might be helpful to think about combinatorics, probability, algebra, or geometry. Is the quantity $x$ enumerating objects you are familiar with, the probability of an event, the dimension of a vector space, or the area/volume of a region? There are plenty of inequalities that follow this way. Think of Littlewood-Richardson coefficients, for example.
Another helpful factor is symmetry. Is your inequality invariant under permuting some of its variables? While I don't remember the paper right now, Pólya has an article where he talks about the "principle of nonsufficient reason", which basically boils down to the strategy that if your function is symmetric enough, then so are its extremal points (there is no sufficient reason to expect asymmetry in the maximal/minimal points, is how he puts it). This is similar in vein to using Lagrange multipliers. Note, however, that sometimes it is the opposite of this that comes in handy. Schur's inequality, for example, is known to be impossible to prove using "symmetric methods"; one must break the symmetry by assuming an arbitrary ordering on the variables. (I think it was sent by Schur to Hardy as an example of a symmetric polynomial inequality that doesn't follow from Muirhead's theorem; see below.)
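For concreteness, a throwaway numeric test of Schur's inequality in its degree-one form (the random sampling in Sage is only my illustration):

# Schur's inequality, t = 1: for x, y, z >= 0,
# x(x-y)(x-z) + y(y-x)(y-z) + z(z-x)(z-y) >= 0
for trial in range(5):
    x0, y0, z0 = (RR.random_element(0, 10) for _ in range(3))
    print(x0*(x0-y0)*(x0-z0) + y0*(y0-x0)*(y0-z0) + z0*(z0-x0)*(z0-y0) >= 0)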
Majorization theory is yet another powerful tool. The best reference that comes to mind is Marshall and Olkin's book "Inequalities: Theory of Majorization and Its Applications". This is related to what you call convexity and some other notions. Note that there is a lot of literature devoted to inequalities involving "almost convex" functions, where a weaker notion than convexity is usually used. Also note the concepts of Schur-convexity, quasiconvexity, pseudoconvexity, etc. One of the simplest applications of majorization theory is Muirhead's inequality, which already generalizes a lot of classical inequalities, including the ones that appear in competitions.
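As a toy illustration of Muirhead in Sage (a sketch; the exponent vectors $(2,1,0)\succ(1,1,1)$ and the helper `sym_sum` are my own choices):

# Muirhead: (2,1,0) majorizes (1,1,1), so the symmetric sum for (2,1,0)
# dominates the one for (1,1,1) on nonnegative inputs (here: 6xyz)
from itertools import permutations
def sym_sum(expo, vals):
    return sum(prod(v^e for v, e in zip(p, expo)) for p in permutations(vals))
vals = [RR.random_element(0, 10) for _ in range(3)]
print(sym_sum((2,1,0), vals) >= sym_sum((1,1,1), vals))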
Sometimes you might want to take advantage of the duality between discrete and continuous. So depending on which tools you have at your disposal, you may choose to prove, say, the inequality $$\sum_{n=1}^{\infty}\left(\frac{a_1+\cdots+a_n}{n}\right)^p\le \left(\frac{p}{p-1}\right)^p \sum_{n=1}^{\infty}a_n^p$$ or its continuous/integral version $$\int_{0}^{\infty}\left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p dx \le \left(\frac{p}{p-1}\right)^p \int_{0}^{\infty} f(x)^p\, dx.$$ I've found this useful on different occasions (in both directions).
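A quick numeric sanity check of the discrete version for $p=2$ in Sage (a sketch; truncating the left-hand side at $200$ terms only makes it smaller, so the test is conservative):

# Discrete Hardy inequality for p = 2 on a random nonnegative sequence
p = 2
seq = [RR.random_element(0, 1) for _ in range(200)]
partial, lhs = 0, 0
for k, ak in enumerate(seq, 1):
    partial += ak
    lhs += (partial/k)^p
print(lhs <= (p/(p-1))^p * sum(ak^p for ak in seq))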
Other things that come to mind, but that I'm too lazy to describe, are "integration preserves positivity", the uncertainty principle, using the mean value theorem to reduce the number of variables, etc. What also comes in handy, sometimes, is searching whether others have considered your inequality before. This might prevent you from spending too much time on an inequality like $$\sum_{d|n}d \le H_n+e^{H_n}\log H_n$$ where $H_n=\sum_{k=1}^n \frac{1}{k}$.