Disclaimer: the comment section is getting overgrown, so here is an answer that I hope will clear up all your doubts.
First part:
As said in the comment section, $(1)$ is an inner product on the vector space $\Lambda^p_xM$, while $(2)$ is an inner product on the vector space $\Omega^p(M)$. The link between them is that $(2)$ is obtained by integrating $(1)$ over the whole manifold $M$.
Here is an analogy: consider the set $M=[0,1]$. Then for each $x\in [0,1]$, $T_x[0,1] = \mathbb{R}$, and one can define an inner product on $T_x[0,1] = \mathbb{R}$ by $\langle a,b \rangle_x = a\times b$. This is $(1)$.
A vector field on $[0,1]$ is just a smooth function $f:[0,1] \to \mathbb{R}$, and the analogue of $(2)$ here is
$$
\langle f,g\rangle = \int_0^1 f(x) g(x) \mathrm{d}x = \int_0^1 \langle f, g\rangle_x \mathrm{d}x.
$$
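As a quick numerical sanity check of this analogy (a sketch using NumPy; the particular fields $f$ and $g$ are arbitrary illustrative choices, not from the text above), one can verify that integrating the pointwise inner products reproduces the closed-form value of the global inner product:

```python
import numpy as np

# Sketch: on [0,1] the pointwise inner product <f,g>_x is just f(x)g(x),
# and the global inner product is its integral over [0,1].
# f and g below are arbitrary illustrative choices.
f = lambda x: np.sin(np.pi * x)
g = lambda x: x * (1 - x)

x = np.linspace(0.0, 1.0, 100_001)
pointwise = f(x) * g(x)                 # <f, g>_x sampled on a grid
dx = x[1] - x[0]
# Trapezoid rule for the integral of the pointwise products:
inner = dx * (pointwise.sum() - 0.5 * (pointwise[0] + pointwise[-1]))

# Closed form: integral of sin(pi x) * x(1-x) over [0,1] is 4/pi^3.
assert abs(inner - 4 / np.pi**3) < 1e-8
print(inner)
```

The point of the check is that the single number $\langle f,g\rangle$ is built from the whole family of pointwise numbers $\langle f,g\rangle_x$.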
Second part:
If $V$ is a vector space and $\langle\cdot,\cdot\rangle$ is an inner product on it, one can create a Riemannian metric on $V$, thought of as a manifold, in the following way. As a vector space, the tangent bundle of $V$ is trivial:
$$TV = V\times V$$
and one can define the Riemannian metric $g_v = \langle\cdot,\cdot\rangle$ for $v\in V$. It is a constant Riemannian metric because, in the canonical trivialization, $g_v$ is independent of $v \in V$.
Take $V = \Omega^p(M)$ and $\langle \alpha,\beta\rangle = \int_M \alpha \wedge \star \beta$. Now, forget that $V$ and $\langle\cdot,\cdot\rangle$ are defined in terms of a Riemannian manifold $(M,g)$ and just look at the structure: it is a vector space with an inner product. Hence, for this inner product, $\|\alpha\|$ is a number.
If you really want to think of this construction as a Riemannian manifold, like in the first paragraph, then $\|\alpha\|$ will be a function:
$$
\|\alpha\| : \beta \in \Omega^p(M) \mapsto \|\alpha\|(\beta) = \|\alpha\|\in \mathbb{R}
$$
which is constant and does not take points of $M$ as entries.
Comment: if you really do not understand what I said, here is just a question for you: for $x \in M$, how would you define $\left(\int_M \alpha\wedge \star \beta\right)(x)$?
This is the exact same thing as this question: how would you define $\left(\int_0^1 t^2 \mathrm{d}t\right)\left(\frac{1}{2}\right)$?
$
\newcommand\Ext{{\textstyle\bigwedge}}
\newcommand\trans[1]{#1^{\mathrm T}}
\newcommand\form[1]{\langle#1\rangle}
\newcommand\K{\mathbb K}
\newcommand\Tensor{{\textstyle\bigotimes}}
\newcommand\tensor\otimes
\newcommand\lintr{\mathbin\lrcorner}
\newcommand\End{\mathrm{End}}
\newcommand\EndOp{\End_{\mathrm{op}}}
$
I'm going to start with demonstrating how my comment gives the desired result, then prove the key formula, then give an abstract approach to constructing the inner product on $\Ext V$ that doesn't involve any coordinates or bases.
Generalized Cauchy-Binet Formula
The key result in your notation is
$$
(AB)(I, K) = \sum_{J\nearrow} A(I, J)B(J, K),
$$
from which it follows that
$$
\sum_{I\nearrow} Q(I, J)Q(I, K)
= \sum_{I\nearrow} \trans Q(J, I)Q(I, K)
= (\trans QQ)(J, K)
= \delta_{J,K},
$$
so the RHS of your last equation becomes
$$\begin{aligned}
&\sum_{I\nearrow}\left(\sum_{J\nearrow}a_JQ(I, J)\right)
\left(\sum_{K\nearrow}b_KQ(I, K)\right)
\\
&\qquad= \sum_{J\nearrow}\sum_{K\nearrow}a_Jb_K\sum_{I\nearrow}Q(I, J)Q(I, K)
\\
&\qquad= \sum_{J\nearrow}\sum_{K\nearrow}a_Jb_K\delta_{J,K}
\\
&\qquad= \sum_{J\nearrow}a_Jb_J.
\end{aligned}$$
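The orthogonality identity $\sum_{I\nearrow} Q(I,J)Q(I,K) = \delta_{J,K}$ can be checked numerically. Below is a small sketch (assuming NumPy; the sizes $n, m$ and the random orthogonal $Q$ are illustrative choices), where $Q(I,J)$ is the determinant of the submatrix of $Q$ with rows $I$ and columns $J$:

```python
import numpy as np
from itertools import combinations

# Minors over increasing multi-indices: Q(I, J) = det of the submatrix of Q
# with rows I and columns J.  For an orthogonal Q, Cauchy-Binet gives
# sum over I of Q(I,J) Q(I,K) = (Q^T Q)(J,K) = delta_{J,K}.
n, m = 5, 3                                         # illustrative sizes
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))    # a random orthogonal matrix

def minor(A, I, J):
    return np.linalg.det(A[np.ix_(I, J)])

idx = list(combinations(range(n), m))   # increasing multi-indices of length m
for J in idx:
    for K in idx:
        s = sum(minor(Q, I, J) * minor(Q, I, K) for I in idx)
        delta = 1.0 if J == K else 0.0
        assert abs(s - delta) < 1e-10
print("minor orthogonality verified for all J, K")
```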
Proof
From here on, we will only talk about the exterior algebra over an $n$-dimensional vector space $V$ with field of scalars $\K$ equipped with a symmetric bilinear form $\form{\cdot,\cdot} : V\times V \to \K$; the Riemannian structure plays no part in this beyond giving us this form on the tangent space, i.e. the metric.
The exterior algebra $\Ext V$ can be defined as the unique-up-to-isomorphism associative algebra with inclusion $V \to \Ext V$ that has the following universal property: for any associative algebra $A$ and linear $f : V \to A$ such that $f(v)^2 = 0$ for all $v \in V$, there is a unique algebra homomorphism $g : \Ext V \to A$ such that $g(v) = f(v)$ for $v \in V$. In other words, $f$ uniquely lifts to a homomorphism on $\Ext V$.
Given any linear $f : V \to V$, we can widen the codomain and consider it as a map $V \to \Ext V$, and trivially $f(v)\wedge f(v) = 0$. So there is a unique extension of $f$ to a homomorphism $\Ext V \to \Ext V$, the outermorphism $f_\wedge$. Since they take vectors to vectors, outermorphisms are grade-preserving. The space $\Ext^{\!n} V$ is one-dimensional, so there is a unique scalar, the determinant $\det(f)$, such that
$$
f_\wedge(I) = (\det f)I,\quad I \in \Ext^{\!n} V.
$$
Every simple multivector $J$ uniquely determines a subspace $[J] \subseteq V$ by
$$
v \in [J] \iff v \wedge J = 0.
$$
Given simple $m$-vectors $J, K$, there are orthogonal projections $P_J, P_K$ onto $[J], [K]$; the $(J,K)$-minor $\det_{J,K}(f)$ of $f$ is the unique scalar such that
$$
(P_J\circ f\circ P_K)_\wedge(K) = (\det_{J,K}f)J.
$$
You should convince yourself that the minors of the matrix of $f$ in an orthonormal basis $\{e_i\}$ are exactly the $(J,K)$-minors where
$$
J = e_1\wedge\cdots\wedge e_{j-1}\wedge e_{j+1}\wedge\cdots\wedge e_n,\quad
K = e_1\wedge\cdots\wedge e_{k-1}\wedge e_{k+1}\wedge\cdots\wedge e_n
$$
for each $j, k$. In fact, if $J = e_{j_1}\wedge\cdots\wedge e_{j_m}$ and $K = e_{k_1}\wedge\cdots\wedge e_{k_m}$, then $\det_{J,K}(f)$ is the determinant of the matrix with entries $(f_{j_a,k_b})_{a,b=1}^m$.
It is now straightforward to prove the generalized Cauchy-Binet formula. Let $J, L$ be simple $m$-vectors, and suppose $U = [K_1]\oplus\cdots\oplus[K_k] \subseteq V$ is a subspace with $K_1,\dotsc, K_k$ simple $m$-vectors and $[K_1], \dotsc, [K_k]$ mutually orthogonal. Then for any $g : V \to U$ and $f : U \to V$
$$\begin{aligned}
\Bigl[(P_J)_\wedge\circ f_\wedge\circ g_\wedge\circ(P_L)_\wedge\Bigr](L)
&= [(P_J)_\wedge\circ f_\wedge]\left(\sum_{i=1}^k K_i\det_{K_i,L}g\right)
\\
&= \sum_{i=1}^k [(P_J)_\wedge\circ f_\wedge\circ (P_{K_i})_\wedge](K_i)\det_{K_i,L}g
\\
&= J\sum_{i=1}^k(\det_{J,K_i}f)(\det_{K_i,L}g),
\end{aligned}$$
where we've used the fact that the outermorphism of a composition is the composition of the outermorphisms, and that $K_i = (P_{K_i})_\wedge(K_i)$. The first equality follows since $(P_U)_\wedge = \sum_{i=1}^k(P_{K_i})_\wedge$ (which is specifically a result of the $[K_i]$ being mutually orthogonal). Hence
$$
\det_{J,L}(f\circ g) = \sum_{i=1}^k(\det_{J,K_i}f)(\det_{K_i,L}g).
$$
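Specializing to matrices in an orthonormal basis, this is the classical Cauchy-Binet formula, which is easy to verify numerically. Here is a sketch (assuming NumPy; the sizes, random matrices, and the multi-indices $J$, $L$ are arbitrary illustrative choices):

```python
import numpy as np
from itertools import combinations

# Classical Cauchy-Binet as the matrix case of the formula above:
# det_{J,L}(F G) = sum over increasing K of det_{J,K}(F) * det_{K,L}(G).
rng = np.random.default_rng(1)
n, p, m = 4, 6, 2            # F is n x p, G is p x n, minors of size m
F = rng.standard_normal((n, p))
G = rng.standard_normal((p, n))

def minor(A, I, J):
    return np.linalg.det(A[np.ix_(I, J)])

J = (0, 2); L = (1, 3)       # arbitrary increasing multi-indices
lhs = minor(F @ G, J, L)
rhs = sum(minor(F, J, K) * minor(G, K, L) for K in combinations(range(p), m))
assert abs(lhs - rhs) < 1e-10
print("generalized Cauchy-Binet holds:", lhs)
```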
The Natural Pairing $\Ext V^*\times\Ext V \to \K$
Let $\Tensor V$ denote the tensor algebra of $V$. It has the universal property that every linear map $V \to A$ with $A$ an associative algebra extends to a homomorphism $\Tensor V \to A$. We will not distinguish the original map from the extended map.
Let $\omega : V \to V$ be a linear involution; considered as a map into $\Tensor V$, it extends to an algebra involution on $\Tensor V$. Every linear $f : V \to \Tensor V$ then defines a unique $\omega$-derivation $\Omega_f : \Tensor V \to \Tensor V$ by
$$
\Omega_f(1) = 0,\quad \Omega_f(v) = f(v),\quad
\Omega_f(v\tensor X) = f(v)\tensor X + \omega(v)\tensor \Omega_f(X)
$$
for $v \in V$ and $X \in \Tensor^kV$.
It can be confirmed that this gives a well-defined map, and that it is indeed a derivation:
$$
\Omega_f(X\tensor Y) = \Omega_f(X)\tensor Y + \omega(X)\tensor\Omega_f(Y).
$$
The exterior algebra can be realized as a quotient $\Ext V = \Tensor V/I$ where $I$ is the two-sided ideal generated by $\{v\tensor v \;:\; v \in V\}$. As such, every map $f : V \to \Ext V$ lifts to a (non-unique) map $f' : V \to \Tensor V$, which gives a unique derivation $\Omega_{f'}$. It can be confirmed that $\Omega_{f'}$ maps $I$ to $I$ if
$$
\Omega_{f'}(v\tensor v) = f'(v)\tensor v + \omega(v)\tensor f'(v) \in I
$$
for all $v \in V$, and this condition descends to $\Ext V$ as
$$
f(v)\wedge v + \omega(v)\wedge f(v) = 0.
$$
Under this condition, $f$ extends to a unique $\omega$-derivation on $\Ext V$.
Now, for any $\alpha \in V^*$ define $\alpha\lintr v = \alpha(v)$. We choose $\omega(v) = -v$ and confirm
$$
(\alpha\lintr v)\wedge v + (-v)\wedge(\alpha\lintr v)
= (\alpha\lintr v)v - (\alpha\lintr v)v
= 0.
$$
So $\alpha\lintr$ extends to an antiderivation; note that it takes grade $k$ to grade $k-1$. So now we have a map
$$
\alpha \mapsto \alpha\lintr : V^* \to \End(\Ext V)
$$
into the linear endomorphisms of $\Ext V$. Here we have a choice; we can take this as is, or map it into the opposite algebra $\EndOp(\Ext V)$ (which is $\End(\Ext V)$ with its multiplication reversed). It turns out the inner product you want corresponds to the $\EndOp(\Ext V)$ choice. We can confirm that
$$
[(\alpha\lintr)\circ(\alpha\lintr)](X) = \alpha\lintr(\alpha\lintr X) = 0
$$
for all $X$, and so $\alpha \mapsto \alpha\lintr$ extends to a homomorphism $\Ext V^* \to \EndOp(\Ext V)$, i.e. an antihomomorphism $\Ext V^* \to \End(\Ext V)$ which acts like a homomorphism but reverses products. This property implies
$$
\omega\lintr X \in \Ext^{\!l-k}V \quad\text{for}\quad \omega \in \Ext^{\!k} V^*,\quad X \in \Ext^{\!l} V.
$$
We finally define the natural pairing $\Ext V^*\times\Ext V \to \K$ by
$$
\form{\omega, X} = \form{\omega\lintr X}_0
$$
where $\form\cdot_0$ selects the scalar component. For $\omega^1,\dotsc,\omega^k \in V^*$ and $v_1,\dotsc,v_k \in V$, we can use the fact that $\omega^i\lintr$ is an antiderivation to show that
$$\begin{aligned}
\form{\omega^1\wedge\cdots\wedge\omega^k,\:
v_1\wedge\cdots\wedge v_k}
&= (\omega^1\wedge\cdots\wedge\omega^k)\lintr(v_1\wedge\cdots\wedge v_k)
\\
&= \omega^k\lintr\omega^{k-1}\lintr\cdots\lintr\omega^1\lintr(v_1\wedge\cdots\wedge v_k)
\\
&= \det\bigl(\omega^i(v_j)\bigr)_{i,j=1}^k,
\end{aligned}$$
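This determinant formula can be checked numerically in coordinates. In an orthonormal basis, the coefficient of $e_I$ in $v_1\wedge\cdots\wedge v_k$ is the $I$-row minor of the matrix $[v_1\,|\,\cdots\,|\,v_k]$, and summing products of these coefficients over increasing $I$ must reproduce $\det\bigl(\langle u_i, v_j\rangle\bigr)$. A sketch (assuming NumPy; the dimensions and random vectors are illustrative):

```python
import numpy as np
from itertools import combinations

# In an orthonormal basis, the coefficient of e_I in v_1 ^ ... ^ v_k is the
# minor of the column matrix [v_1 | ... | v_k] taken at rows I.  Summing
# coefficient products over increasing I reproduces det(<u_i, v_j>).
rng = np.random.default_rng(2)
n, k = 5, 3
U = rng.standard_normal((n, k))   # columns u_1..u_k (illustrative vectors)
V = rng.standard_normal((n, k))   # columns v_1..v_k

def coeff(A, I):                  # coefficient of e_I in the wedge of A's columns
    return np.linalg.det(A[list(I), :])

lhs = sum(coeff(U, I) * coeff(V, I) for I in combinations(range(n), k))
rhs = np.linalg.det(U.T @ V)      # det(<u_i, v_j>) in the standard inner product
assert abs(lhs - rhs) < 1e-10
print("pairing of wedges = det of pairings:", rhs)
```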
where we've used the convention that $\lintr$ is right-associative. The musical isomorphisms $\sharp$ and $\flat$ extend to isomorphisms between $\Ext V^*$ and $\Ext V$, and allow us to transfer this pairing to either space:
$$
\form{X, Y} = \form{X^\flat\lintr Y}_0,\quad \form{\omega, \eta} = \form{\omega\lintr\eta^\sharp}_0.
$$
Best Answer
I didn’t fully read it, but yes, that’s essentially it. Given a $k$-form $\omega$ such that $\langle \omega,\cdot\rangle_{L^2}=0$, you fix a point $p\in M$ and a sufficiently small open neighborhood $U$ of $p$ on which we have a pointwise ‘orthonormal’ frame of vector fields; this gives rise to 1-forms, which in turn give rise to ‘orthonormal’ $k$-forms $\{\alpha^{i_1}\wedge\cdots\wedge \alpha^{i_k}\}$. Note that verifying that the wedges of the 1-forms are ‘orthonormal’ with respect to the induced pseudo-inner product on $\Lambda^k(T^*M)$ takes a bit more work in the non-Riemannian case: you can’t just say that because it’s a subspace of $T^0_k(TM)$ the restriction is non-degenerate (e.g. in 2D Minkowski space, restricting to one of the 45-degree lines gives the zero tensor, which is as far from non-degeneracy as you can get). In any case, this is a linear algebra fact, so do it on a single vector space first.
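The degenerate-restriction caveat is easy to see in coordinates. A minimal sketch (assuming NumPy) of the 2D Minkowski example: with $\eta = \operatorname{diag}(1,-1)$, the null vector $v=(1,1)$ spans a 45-degree line, and $\eta$ restricted to that line is identically zero:

```python
import numpy as np

# 2D Minkowski metric eta = diag(1, -1); v = (1, 1) is a null vector, so the
# restriction of eta to the 45-degree line span(v) is the zero bilinear form.
eta = np.diag([1.0, -1.0])
v = np.array([1.0, 1.0])
for a in (-2.0, 0.5, 3.0):                  # arbitrary points a*v on the line
    for b in (1.0, -0.7):
        assert abs((a * v) @ eta @ (b * v)) < 1e-12
print("restriction of eta to span((1,1)) is the zero bilinear form")
```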
From here, it’s a matter of reducing to the classical ‘fundamental lemma of the calculus of variations’. Fix an arbitrary smooth function $f:M\to\Bbb{R}$ whose support is compact and contained in $U$, fix an increasing multi-index $I=(i_1,\dots, i_k)$, and consider the $k$-form $f\alpha^I\equiv f\alpha^{i_1}\wedge\cdots\wedge \alpha^{i_k}$. Very strictly speaking, I’m abusing notation slightly here, because $f$ is defined on all of $M$ while $\alpha^I$ is only defined on $U$, so the product is a priori only defined on $U$; but since $f$ has compact support, I can extend the product by $0$ outside $U$, and the result is a smooth $k$-form on $M$. Now, we compute: \begin{align} 0&=\langle\omega,f\alpha^I\rangle_{L^2} =\int_M\pm\omega_If\,dV_g =\int_U\pm\omega_If\,dV_g, \end{align} where the first equality is by assumption, the second by ‘orthonormality’ (the $\pm$ sign is given by $\langle\alpha^I,\alpha^I\rangle_{\bigwedge^k(T^*M)}$, a constant function with value $1$ or $-1$ due to ‘orthonormality’), and the last because $f$ has support in $U$. Since $f$ is an arbitrary smooth function with compact support in $U$, we are back in the classical situation, so we conclude that $\omega_I=0$. Since the multi-index $I$ was arbitrary, it follows that $\omega=0$ on $U$; and since $p$ was arbitrary, $\omega=0$ on $M$, proving non-degeneracy.
What makes your proof slightly unpleasant to read is that you’re doing too many things simultaneously. For instance, you’re defining bump functions explicitly, and you’re trying to reprove the classical fundamental lemma of the calculus of variations (this is a standard result, so if you want, prove it first and then just invoke it here, but don’t nest one proof inside another inside another). In fact, you can drop the smoothness assumptions and show that if $\omega$ is an $L^2$ differential $k$-form on $M$ such that $\langle\omega,\cdot\rangle_{L^2}=0$, then $\omega=0$ a.e. on $M$ (with respect to the measure $dV_g$); the proof is virtually unchanged.