Letting primes denote derivatives with respect to $\theta$ and writing $A=C+C^T$, I concur with your calculations:
$$
\eqalign{
&&\text{reasons:}\\
Q''
&=\frac{d^2Q}{d\theta^2}
=\frac{d}{d\theta} \left(F^T\cdot A\cdot F\,'\right)\qquad
&
\href{http://en.wikipedia.org/wiki/Matrix_calculus#Identities}{\text{chain rule, }}
\href{http://en.wikipedia.org/wiki/Matrix_calculus#Derivative_of_quadratic_functions}{\frac{\partial{\mathbf{x}^T\mathbf{C}\mathbf{x}}}{\partial\mathbf{x}}=\mathbf{x}^T\left(\mathbf{C}+\mathbf{C}^T\right)}
\\
&=\left(\frac{d}{d\theta}\left(F^T\cdot A\right)\right)\cdot F\,'
&\href{http://en.wikipedia.org/wiki/Matrix_calculus#Identities}{\text{product rule}}\text{, assumption }F\,''=0
\\
&=\left(F\,'\right)^T\cdot A \cdot F\,'
&\href{http://en.wikipedia.org/wiki/Matrix_calculus#Identities}{\text{product rule}}\text{, assumption }A\,'=0
\\&&\text{or }
\href{http://en.wikipedia.org/wiki/Matrix_calculus#Identities}{\text{chain rule, }}
\href{http://en.wikipedia.org/wiki/Matrix_calculus#Derivative_of_linear_functions}{\frac{\partial{\mathbf{x}^T\mathbf{A}}}{\partial\mathbf{x}}=\mathbf{A}}
}
$$
The logic is all based on summation; the transposes are simply what the definition of the matrix product requires.
For vectors $\vec{x},\vec{y}\in\mathbb{R}^n\cong\mathbb{R}^{n\times1}$
identified with $n\times1$ column matrices,
$$
\eqalign{
\vec{x}^T\cdot\vec{y}
&=\left[\matrix{x_1\\\vdots\\x_n}\right]^T \cdot
\left[\matrix{y_1\\\vdots\\y_n}\right]
\\
&=\left[\matrix{x_1&\cdots&x_n}\right] \cdot
\left[\matrix{y_1\\\vdots\\y_n}\right]
\\
&=\sum_{i=1}^n x_i y_i
}
$$
Tensor notation goes further and dispenses with the summation signs, introducing instead the convention that a repeated superscript-subscript pair is summed over: $x_iy^i$ (or $x^iy_i$) for the sum above. But since this is a single real quantity, the ordinary product rule applies to its derivative, and the rules of matrix multiplication then let us write the identities the way they appear in multifarious sources. Any good book on real analysis, vector calculus/analysis, tensor calculus/analysis or differential geometry covers this; one (real analysis) example is Rudin's Principles of Mathematical Analysis, but there are many other good recommendations.
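Incidentally, this index-pairing convention is exactly what NumPy's einsum implements; a small illustration (not part of the question, just a sanity check):
import numpy as np
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
# the repeated index i in 'i,i->' is summed over, like x_i y^i
print(np.einsum('i,i->', x, y))
# the same single real quantity as the matrix product x^T . y
print(x @ y)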
So the second derivative of a quadratic form in $\mathbf{x}(\theta)$ with constant matrix $C$, where $\mathbf{x}''=\mathbf{0}$, is another quadratic form, in $\mathbf{x}'$, with matrix $A=C+C^T$. Good to know.
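A quick numerical check of that statement (a sketch with made-up data, assuming the affine case $\mathbf{x}(\theta)=\mathbf{x}_0+\theta\,\mathbf{v}$, so that $\mathbf{x}''=\mathbf{0}$):
import numpy as np
rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))    # arbitrary, not necessarily symmetric
x0 = rng.standard_normal(4)
v = rng.standard_normal(4)         # x(theta) = x0 + theta*v, so x'' = 0
Q = lambda t: (x0 + t*v) @ C @ (x0 + t*v)
# central difference for Q'' (exact for a quadratic, up to rounding)
h, t = 1e-3, 0.7
print((Q(t + h) - 2*Q(t) + Q(t - h)) / h**2)
# the claimed quadratic form in x' with matrix A = C + C^T
print(v @ (C + C.T) @ v)
The two printed numbers should agree to within rounding error.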
As to the convergence, are the conditions for Newton's method met?
Is $Q'\ne0$ on a suitable interval $I$ of $\theta$? Do you know the value of
$$M=\sup_{\theta\in I}\frac12\left|\frac{Q''}{Q'}\right|,$$
your constant for quadratic convergence
(i.e. so that $|\epsilon_{n+1}|\le M\epsilon_n^2$)? And, lastly, do you have good initial estimates?
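To make those conditions concrete, here is a minimal Newton iteration in the same setting (a sketch with a deliberately simple made-up $C$ and an affine $F(\theta)=u+\theta v$, so that $F''=0$ and $Q(\theta)=\theta^2-1$):
import numpy as np
C = np.diag([1.0, -1.0])      # toy indefinite example
A = C + C.T
u = np.array([0.0, 1.0])
v = np.array([1.0, 0.0])      # F(theta) = u + theta*v, F' = v, F'' = 0
F = lambda t: u + t*v
Q = lambda t: F(t) @ C @ F(t)         # here Q(theta) = theta^2 - 1
Q1 = lambda t: F(t) @ A @ v           # Q'  = F^T A F'
# Q'' = F'^T A F' = v @ A @ v, a constant in this affine setting
t = 2.0                               # initial estimate
for _ in range(6):
    t = t - Q(t) / Q1(t)              # Newton step
    print(t, abs(t - 1.0))            # error shrinks quadratically toward the root at 1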
Also, what is the geometry (e.g. eigenvalues/-vectors, Witt index) of $C$ & $A$?
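For that last question, the spectrum and signature are easy to inspect numerically; e.g. for the toy $C$ above (recall that a nondegenerate real symmetric form of signature $(p,q)$ has Witt index $\min(p,q)$):
import numpy as np
C = np.diag([1.0, -1.0])
A = C + C.T
w = np.linalg.eigvalsh(A)             # A is symmetric, so eigvalsh applies
p, q = int(np.sum(w > 0)), int(np.sum(w < 0))
print(w, (p, q), min(p, q))           # eigenvalues, signature, Witt index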
Here's an example application using Python / NumPy:
import numpy as np
# input vectors
v1 = np.array([1, 1, 1, 1, 1, 1])
v2 = np.array([2, 3, 4, 5, 6, 7])
# Gram-Schmidt orthonormalization of the plane spanned by v1, v2
n1 = v1 / np.linalg.norm(v1)
v2 = v2 - np.dot(n1, v2) * n1
n2 = v2 / np.linalg.norm(v2)
# rotation by pi/2 in the (n1, n2)-plane
a = np.pi / 2
I = np.identity(6)
R = (I
     + (np.outer(n2, n1) - np.outer(n1, n2)) * np.sin(a)
     + (np.outer(n1, n1) + np.outer(n2, n2)) * (np.cos(a) - 1))
# check the result: R should take n1 to n2
print(np.matmul(R, n1))
print(n2)
The two printed vectors agree: $R$ rotates $n_1$ onto $n_2$.
The general procedure is called tensor contraction. Concretely it's given by summing over various indices. For example, just as ordinary matrix multiplication $C = AB$ is given by
$$c_{ij} = \sum_k a_{ik} b_{kj}$$
we can contract by summing across any index. For example, we can write
$$c_{ijlm} = \sum_k a_{ijk} b_{klm}$$
which gives a $4$-tensor ("$4$-dimensional matrix") rather than a $3$-tensor. One can also contract twice, for example
$$c_{il} = \sum_{j,k} a_{ijk} b_{kjl}$$
which gives a $2$-tensor.
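In NumPy, both contractions are one-line einsum calls (an illustration with arbitrary shapes):
import numpy as np
rng = np.random.default_rng(1)
a = rng.standard_normal((2, 3, 4))           # a_{ijk}
b = rng.standard_normal((4, 3, 5))           # b_{klm} (or b_{kjl})
c4 = np.einsum('ijk,klm->ijlm', a, b)        # contract over k: a 4-tensor
c2 = np.einsum('ijk,kjl->il', a, b)          # contract over j and k: a 2-tensor
print(c4.shape, c2.shape)                    # (2, 3, 3, 5) (2, 5)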
The abstract details shouldn't matter much unless you explicitly want to implement mixed variance, which, as far as I know, nobody who writes algorithms for manipulating matrices does.