For any field, we can define the exponent of matrix multiplication over that field to be the smallest number $\omega$ such that $n \times n$ matrix multiplication can be done in $n^{\omega + o(1)}$ field operations as $n \to \infty$. Schönhage proved that this exponent is invariant under field extensions, so it depends only on the characteristic of the field. Probably it equals $2$ in every characteristic, but that isn't known. (Certainly $\omega \ge 2$: the output has $n^2$ entries, and each takes at least one operation to produce.)
Let $\omega_p$ be the exponent in characteristic $p$. Then one can show that $\limsup_{p \to \infty} \omega_p \le \omega_0$, basically because every low-rank tensor decomposition in characteristic $0$ will work for all but finitely many primes. Over the rationals, you just need to avoid primes that occur in denominators.
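To spell out that reduction: for any $\varepsilon > 0$, pick a rank-$r$ decomposition of the $n \times n$ matrix multiplication tensor over $\mathbb{Q}$ with $\log_n r \le \omega_0 + \varepsilon$,
$$\langle n, n, n \rangle = \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i,$$
where the vectors $u_i, v_i, w_i$ have rational entries. For every prime $p$ dividing none of the denominators, the identity survives reduction mod $p$, so the tensor also has rank at most $r$ over $\mathbb{F}_p$, and hence $\omega_p \le \log_n r \le \omega_0 + \varepsilon$.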
However, for small primes the exponent could (as far as we know) be better or worse than in characteristic $0$. It is even possible that it is substantially better for all primes, although in that case one can show that as $p \to \infty$ the asymptotics would take longer and longer to kick in; i.e., the size of the matrices needed to see the improvement would grow as a function of $p$.
Strassen's exponent-$\log_2 7 \approx 2.81$ algorithm works in every characteristic, and it's the only practical method that achieves an exponent less than $3$. However, if you want to use it in practice, it's important to think about issues like cache behaviour and memory access, which are often more important than counting arithmetic operations. Unless you really know what you are doing, it's not worth writing your own code, except as a learning exercise. For example, in characteristic $2$ the M4RI library seems like it could be a good bet.
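To make the recursion concrete, here is a minimal NumPy sketch of Strassen's seven-product scheme (assuming power-of-two sizes; real implementations pad or peel odd dimensions, and as noted above, a tuned library will beat hand-rolled code):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen multiply for square matrices of power-of-two size.
    Sketch only: production code handles odd sizes and tunes the
    cutoff for cache behaviour."""
    n = A.shape[0]
    if n <= cutoff:          # below the cutoff, classical multiply wins
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven recursive products; all coefficients are +-1, which is
    # why the scheme works over a field of any characteristic.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])
```

Seven half-size multiplications instead of eight is exactly where the $\log_2 7$ exponent comes from.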
If I understand the question correctly, "constant" means that $H$ is a fixed but arbitrary positive definite matrix.
In general, I don't think you can compute the matrix-vector product $Hv$ faster than $O(n^2)$. But if $H$ has structure (Toeplitz, circulant, strictly diagonally dominant, etc.), or if you are willing to settle for approximate answers, you can compute the product faster ("randomized linear algebra" is a good search term).
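As an illustration of the structured case: a circulant $H$ is diagonalized by the DFT, so $Hv$ costs $O(n \log n)$. A minimal NumPy sketch:

```python
import numpy as np

def circulant_matvec(c, v):
    """Compute Hv where H is circulant with first column c.
    H = F^{-1} diag(fft(c)) F, so the product is two FFTs and an
    inverse FFT: O(n log n) instead of O(n^2)."""
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)).real
```

(A Toeplitz $H$ can be embedded in a circulant matrix of twice the size and handled the same way.)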
A very naive approach is the following: instead of using $H$, use $H_k$, the best rank-$k$ approximation to $H$. Once its factors are precomputed, $H_k v$ can be computed in time $O(nk)$ for any vector $v$. The error is $\|Hv - H_kv\| \le \|H - H_k\| \, \|v\|$, which is fairly easy to characterize.
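A sketch of that approach with a truncated SVD (by Eckart–Young, the truncation is the best rank-$k$ approximation in the spectral norm, so the error constant is $\|H - H_k\| = \sigma_{k+1}$):

```python
import numpy as np

def top_k_factors(H, k):
    """One-time preprocessing: factors of the best rank-k approximation
    H_k = U_k diag(s_k) V_k^T.  A dense SVD costs O(n^3); randomized
    methods can bring this down."""
    U, s, Vt = np.linalg.svd(H)
    return U[:, :k], s[:k], Vt[:k]

def low_rank_matvec(Uk, sk, Vtk, v):
    """Approximate Hv as U_k (s_k * (V_k^T v)): three thin products,
    O(nk) per vector.  Error is at most sigma_{k+1} * ||v|| in the 2-norm."""
    return Uk @ (sk * (Vtk @ v))
```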
Here are some resources I found useful while learning about this stuff.
Knuth's The Art of Computer Programming, Vol. 2, contains a series of exercises that lead you through an $o(n^{2.5})$ matrix multiplication algorithm. Unfortunately, I don't have a copy in front of me, so I can't tell you the specific exercises.
A nice exposition by Andrew Stothers: http://www.maths.ed.ac.uk/~s0237198/report1styr.pdf
EDIT:
There are some other lecture notes that I couldn't dig up at first. Here they are: http://www-cc.cs.uni-saarland.de/teaching/SS09/ComplexityofBilinearProblems/script.pdf

If you search for "strassen laser method" you will find more nice hits. In principle, "schoenhage tau theorem" should also yield results, but it doesn't seem to. (These are the two prior results that Coppersmith and Winograd built on.)