Your work looks pretty good, but you can take it a step further since you know $\vec r_i'(t)$. When you substitute in this information, each integral depends on only one component of $\vec V$, not both. For instance,
$$\int^{b_1}_{a_1} \vec V(\vec r_1(t)) \cdot \vec r_1'(t) \; dt
=\int^{b_1}_{a_1} u(\vec r_1(t)) \; dt$$
The next task is to write a routine to implement the function $\vec V$; that is, you want a function from $\mathbb{R}^2\rightarrow\mathbb{R}^2$. Use something like `interp2` (though this may not be the most efficient). Once you have this accomplished, simply evaluate the now one-dimensional integrals using, for instance, the `quad` function.
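To make this concrete, here is a sketch of the same idea in Python/SciPy (standing in for Octave's `interp2` and `quad`, which work analogously). The grid, the field components $u$ and $v$, and the path $\vec r_1(t) = (t, 0)$ are all invented for illustration; with this path, $\vec r_1'(t) = (1, 0)$, so the dot product reduces to $u(\vec r_1(t))$ exactly as above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import RegularGridInterpolator

# Sample the vector field V = (u, v) on a grid (this plays the role of
# the gridded data you would hand to interp2). The field is invented.
x = np.linspace(0.0, 1.0, 11)
y = np.linspace(0.0, 1.0, 11)
X, Y = np.meshgrid(x, y, indexing="ij")
u_grid = X**2 + Y          # u(x, y), invented for illustration
v_grid = X - Y**2          # v(x, y), invented for illustration

u = RegularGridInterpolator((x, y), u_grid)
v = RegularGridInterpolator((x, y), v_grid)

# Path r1(t) = (t, 0), so V(r1(t)) . r1'(t) = u(t, 0).
integrand = lambda t: u([[t, 0.0]])[0]

# The now one-dimensional integral; tolerances loosened since the
# interpolant is only piecewise linear.
I1, est_err = quad(integrand, 0.0, 1.0, epsabs=1e-4, epsrel=1e-4)
print(I1)  # near 1/3, the exact integral of t^2 on [0, 1]
```

The answer is close to $1/3$ rather than exact because the interpolant only approximates the underlying field between grid points.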
The way you created the theta vector can be used in both cases. The important difference, and the reason the course slide uses the transpose of $\theta$ while your function doesn't, is that the order of operations is reversed: you use matrix multiplication while the slide's function uses a dot product.

To see this, first consider what you've done. In this case, $X$ is an $m \times 2$ matrix, $y$ is an $m \times 1$ matrix and $\theta$ is a $2 \times 1$ matrix. As such, $X \theta$ is the product of an $m \times 2$ matrix and a $2 \times 1$ matrix, resulting in an $m \times 1$ matrix. This is important because it matches the dimensions of $y$, so the expression `X*theta - y` is well defined and yields another $m \times 1$ matrix, which is then further manipulated by squaring each term, summing all of the terms and dividing by $2m$.
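The whole chain just described can be sketched in a couple of lines. Here is a Python/NumPy version with invented example data (in Octave the analogous one-liner would be `sum((X*theta - y).^2) / (2*m)`):

```python
import numpy as np

# Invented example data: m = 4 samples, first column all ones
# (the x_i0 = 1 convention from the question).
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])                  # m x 2
y = np.array([[2.0], [3.0], [4.0], [5.0]])  # m x 1
theta = np.array([[0.5], [1.0]])            # 2 x 1

m = X.shape[0]
residual = X @ theta - y           # (m x 2)(2 x 1) - (m x 1) -> m x 1
J = np.sum(residual**2) / (2 * m)  # square each term, sum, divide by 2m
print(J)  # -> 0.125
```

Every residual here is $-0.5$, so the cost is $4 \cdot 0.25 / 8 = 0.125$, which you can verify by hand.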
As implemented in your function, for each row $x_i$, you are multiplying that row with $\theta$ to get, via matrix multiplication,
$$x_i \times \theta = \left(x_{i0} \; x_{i1}\right) \times {\theta_0 \choose \theta_1} = \left(x_{i0}\theta_0 + x_{i1}\theta_1\right) \tag{1}\label{eq1}$$
In the course slide, note that $\theta^T$ is the transpose, so it's a $1 \times 2$ matrix. Thus, for a row $x_i$, note that $\theta^T x_i$ becomes, if matrix multiplication is used again,
$$\theta^T \times x_i = \left(\theta_0 \; \theta_1\right) \times \left(x_{i0} \; x_{i1}\right) \tag{2}\label{eq2}$$
This doesn't work as matrix multiplication, since the number of columns of the left multiplicand, i.e., $2$, doesn't match the number of rows of the right multiplicand, i.e., $1$. However, it does work as a dot product, with the RHS of \eqref{eq2} then instead becoming
$$\left(\theta_0 \; \theta_1\right) \cdot \left(x_{i0} \; x_{i1}\right) = \theta_0 x_{i0} + \theta_1 x_{i1} \tag{3}\label{eq3}$$
This is basically the same as \eqref{eq1} in terms of the resulting value. I believe the slide's $h_{\theta}$ function is implemented this way as the result of a dot product is defined to be a scalar, while technically the result of the matrix multiplication in \eqref{eq1} is actually a $1 \times 1$ matrix, which is why I show it using brackets. Regardless, in either case, to match the slide's $h_{\theta}$ function value requires that $x_{i0} = 1$ for all $i$, as is done & noted in your question text.
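A quick numeric check of this equivalence, sketched in Python/NumPy with invented numbers: the per-row matrix product of \eqref{eq1} and the dot product of \eqref{eq3} give the same value, differing only in type (a $1 \times 1$ matrix versus a scalar).

```python
import numpy as np

x_i = np.array([[1.0, 3.0]])      # 1 x 2 row, with x_i0 = 1
theta = np.array([[0.5], [2.0]])  # 2 x 1 column

as_matrix = x_i @ theta                      # eq (1): a 1 x 1 matrix
as_dot = np.dot(x_i.ravel(), theta.ravel())  # eq (3): a plain scalar

print(as_matrix.shape, as_matrix[0, 0])  # (1, 1) 6.5
print(as_dot)                            # 6.5
```

Both give $1 \cdot 0.5 + 3 \cdot 2 = 6.5$; only the container differs.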
As you can see, what you're doing is basically equivalent, in terms of resulting numeric value, to what the slide's function proposes. However, it's likely simpler and easier to use matrix multiplication in the MATLAB/Octave code, as you can do it in just one line and let the program environment handle the details of the matrix manipulations behind the scenes.
If you really want your code to match the slide's function more closely, you could of course define $\theta$ as a $1 \times 2$ matrix instead and use its transpose in the code, although you need to keep the same order of operations. Alternatively, to completely match what the slide has, you need to use not only $\theta^T$, but also have the code do a dot product instead. One final way is to have, in \eqref{eq2}, the $x_i$ matrix be of size $2 \times 1$ instead of $1 \times 2$, so matrix multiplication would then work. However, I believe this would complicate various processing, so I wouldn't recommend implementing it this way.
You essentially have eight independent quadrature problems (the rows of `T`). You have several options. Since you didn't provide runnable code, some of the code below may need to be tweaked, and if other issues crop up you'll have to deal with those.

1. Break up the problem and iterate over the rows using a `for` loop and `quad` (you might also try `quadgk`, which performs better in many cases).

2. Use vectorized quadrature via the `quadv` function. Note that the Matlab `quadv` function is scheduled to be removed in a future release.

3. Use Matlab and the newer `integral` function (which uses adaptive Gauss-Kronrod quadrature like `quadgk`) with the `'ArrayValued'` option.

One final thing. Do you really need to be calculating an explicit inverse of a matrix? This is very rarely necessary. It leads to less numerical precision and, in some cases, numerical instability. You might look at how you can modify your problem to use typical linear solution methods, e.g., `\` (`mldivide`) or `linsolve`.
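As a sketch of options 1 and 3, here is a Python/SciPy version (since the original code wasn't runnable, the matrix `T` and the integrand are invented): a row-by-row loop with `quad`, then a single array-valued call with `quad_vec`, SciPy's analogue of `integral(..., 'ArrayValued', true)`. The last lines illustrate the explicit-inverse point using a linear solve, the equivalent of MATLAB's `mldivide`.

```python
import numpy as np
from scipy.integrate import quad, quad_vec

T = np.linspace(0.5, 4.0, 8)  # invented: one parameter per quadrature problem

# Option 1: loop over the eight rows, one 1-D quad call each.
f = lambda x, a: np.exp(-a * x)  # invented integrand
loop_results = np.array([quad(f, 0.0, 1.0, args=(a,))[0] for a in T])

# Option 3 analogue: one array-valued call integrates all eight rows at once.
vec_results, est_err = quad_vec(lambda x: np.exp(-T * x), 0.0, 1.0)

print(np.allclose(loop_results, vec_results))  # True

# Final point: prefer a linear solve over an explicit inverse
# (the mldivide idea); it is generally more accurate and more stable.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_solve = np.linalg.solve(A, b)  # stable, no explicit inverse formed
x_inv = np.linalg.inv(A) @ b     # works here, but less accurate in general
print(np.allclose(x_solve, x_inv))  # True
```

Both quadrature routes agree with the closed form $(1 - e^{-a})/a$ for each row; the array-valued call simply shares the adaptive subdivision across all rows.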