About a year ago I took a linear algebra class that was required for my degree. Unfortunately that class had an unstated prerequisite and started at a much higher level than I really needed; going in, I had no prior experience with linear algebra. I can definitely see how understanding linear algebra would be a very good thing to have in my field, so I've been trying to piece it together ever since. I feel like I'm close, but I just don't quite get it.

I understand that linear algebra is a way to solve a lot of equations rapidly. In my mind that means finding values for the variables, but it didn't seem like we ever did that. Instead we were doing things like multiplying matrices, which made no sense to me, or applying advanced algorithms to put a matrix into certain forms, for reasons that were never explained.

So what does it mean to solve a system of equations? What are some real-world examples that might make understanding linear algebra easier? Why are orthogonal and other special types of matrices so important? Any insights, examples, or suggestions are greatly appreciated!
[Math] happening in a linear algebra computation
linear algebra
Related Solutions
Let me elaborate on my comment.
If you're looking for a "linear algebra" treatment of PDEs, you will get into the field called functional analysis. Functional analysis is a kind of infinite-dimensional linear algebra. Here we work with function spaces, for example the space of square-integrable functions $L^2$ ($f$ is in $L^2$ if $\int |f|^2 < \infty$). One quickly sees that no finite set of functions spans this whole space, so the space is infinite-dimensional.
So we are working with function spaces: in PDE we look for the function spaces in which the solutions of our equations live. That is, we are finding functions that satisfy the equation. It turns out that interpreting the derivative in the classical sense doesn't give you many tools to study certain properties of the solutions, like regularity (how "differentiable" your function is). So we interpret the derivative in a "weak" (or distributional) sense. In this way we get a larger class of functions that satisfy our equation, and we can apply the tools of functional analysis to it. These spaces are called Sobolev spaces. If you would like me to explain in more detail why we study those spaces, do ask.
This was a short summary of why you need functional analysis. Another thing you will need is measure theory. Measure and integration theory studies a different type of integral than the one you're used to (the Riemann integral), namely the Lebesgue integral. This integral has many more nice properties; for example, there are theorems stating that under mild conditions on $f_n, f$, $$\int f_n \to \int f.$$ The Riemann integral also has this property, but only under quite "unnatural" conditions (uniform convergence, for example).
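To make the contrast concrete, here is a standard example (my illustration, not part of the original answer): the functions $f_n(x) = x^n$ on $[0,1]$ converge pointwise but *not* uniformly, yet the Lebesgue theory (dominated convergence, with $|f_n| \le 1$) still lets you pass the limit through the integral:

```latex
% f_n(x) = x^n on [0,1]: the pointwise limit f is 0 on [0,1) and 1 at x = 1,
% so the convergence is not uniform; but |f_n| <= 1, and dominated convergence gives
\int_0^1 x^n \, dx \;=\; \frac{1}{n+1} \;\longrightarrow\; 0 \;=\; \int_0^1 f .
```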
I'm not sure exactly how far along you are, but if I understood correctly, you have an engineering-mathematics background.
So, you would need to know:
Basic Real Analysis:
Some examples:
- The blog by Terence Tao (http://terrytao.wordpress.com/) contains nice lecture notes; look for his real analysis notes.
- Bartle - Real Analysis. This book explains basic real analysis. Here you will get the formal definition of continuity, convergence, the Riemann integral and so on. This is really a prerequisite for measure theory.
- Pugh - Real Mathematical Analysis is another option as is
- Rudin - Principles of Mathematical Analysis. This book is quite hard to study the subject from.
Measure Theory:
- Schilling - Measure, Integrals and Martingales. This is a quite inexpensive book and all the solutions are available online. It is a very gentle introduction. You will only need the measure theory bit, not the probability bit (with the martingales).
- Folland - Real Analysis. This is my favorite; it is also not very easy to learn the subject from, but worth the effort. And last but not least:
- Rudin - Real and Complex Analysis. I suppose the first few chapters are sufficient, but here the same holds as for the other book by Rudin which I have mentioned.
- Terence Tao's blog. This website also has some nice (free!) notes.
Functional analysis:
- Werner - Funktionalanalysis. If you can read German, this is a gentle introduction. You only need to know the Banach spaces and Hilbert spaces, for the moment you will not need the topological vector spaces bit.
- Conway - A Course in Functional Analysis. This is also quite dense, but might be worth the effort. You only need to know the basic theorems, but this book uses a "measure-theoretic" approach.
- Rynne and Youngson - Linear Functional Analysis. This is an undergraduate book, and I think it contains all you need.
Partial Differential Equations:
- Evans - Partial Differential Equations. This is a standard book for such courses, I have studied the subject as well from this book.
- Krylov - Lectures on Elliptic and Parabolic Equations in Sobolev Spaces. Also nice if you prefer a more functional-analytic approach (this book revolves around the study of Sobolev spaces on domains or on the whole space).
This was a short list of suggestions. If you have any questions, do ask!
If you want to do advanced computer vision, and not just implement algorithms, you will need to understand advanced algebraic concepts for linear transformations. You will also need to understand a bit of measure theory and analysis.
Why?
Because research level computer vision involves the development of algorithms. The development of these algorithms necessarily invokes the structural properties of the mathematical objects; properties such as measure, convergence, isometry, isomorphism, etc.
Furthermore, suppose you have the mechanical skills to develop a computational method. Any true research-level effort is also expected to demonstrate a proof of convergence, establish the domain in which the method is effective, compare the method to prior methods, and honestly weigh its weaknesses and benefits.
This requires at least a solid understanding of graduate-level analysis and linear algebra.
Best Answer
I'm not going to try and answer all of your questions because it's really a very broad question. I will at least give you a start and then the best thing you can do is either take another course or open a book and learn some things, coming back to ask more specific questions along the way.
Surely you are familiar with equations like $5x = 6$, where we want to find all solutions $x$ with $x$ in some field, say in this case the real numbers. In this case every nonzero element has an inverse, so you can multiply by $1/5$ to get $x = 6/5$.
Although linear algebra isn't really just about solving equations, that's where it starts. It's called linear because we only want to solve equations that are linear in the unknown variables. The simplest case would be something like
$$ x + y = 4, \qquad 2x - y = -1. $$
Actually we can just write this in matrix form. If you remember how matrix multiplication works, then we can write this system as $Ax = b$:
$$ \begin{pmatrix} 1 & 1\\ 2 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 4\\ -1 \end{pmatrix} $$
The reason we multiply matrices the way we do is that we want to solve $Ax = b$ by multiplying by the inverse of $A$ to get $x = A^{-1}b$. Of course, not all matrices have inverses, so the set of all $n\times n$ matrices, with addition and multiplication, is a ring but not a field. A ring is like a field except that we drop the requirement that every nonzero element have an inverse. Matrix multiplication is also not commutative: $AB$ is not necessarily equal to $BA$.
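To see the non-commutativity concretely, here is a minimal Python sketch (my own illustration, not part of the original answer) using the matrix $A$ from above and an arbitrarily chosen second matrix $B$:

```python
# Matrix multiplication is not commutative: a quick check with
# matrices represented as lists of rows.
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 1], [2, -1]]   # the coefficient matrix from the system above
B = [[0, 1], [1, 0]]    # an arbitrary second matrix (swaps the coordinates)

print(matmul(A, B))  # [[1, 1], [-1, 2]]
print(matmul(B, A))  # [[2, -1], [1, 1]]  -- so AB != BA
```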
In the above case $A$ does have an inverse, and you can multiply on the left by $A^{-1}$ (see if you can find it) to get the solution to this system of equations.
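If you want to check your answer, here is a short Python sketch (my addition, not the answer's own method) that builds the inverse from the standard $2\times 2$ formula $A^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$ and solves the system exactly with `fractions.Fraction`:

```python
# Solve Ax = b for the system x + y = 4, 2x - y = -1 by computing
# A^{-1} explicitly via the 2x2 inverse formula.
from fractions import Fraction

def inverse_2x2(A):
    a, b = A[0]
    c, d = A[1]
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 1], [2, -1]]
b = [4, -1]

A_inv = inverse_2x2(A)          # [[1/3, 1/3], [2/3, -1/3]]
x = [A_inv[0][0] * b[0] + A_inv[0][1] * b[1],   # A^{-1} b, row by row
     A_inv[1][0] * b[0] + A_inv[1][1] * b[1]]
print(x)  # [Fraction(1, 1), Fraction(3, 1)] -> x = 1, y = 3
```

You can verify this by hand: $1 + 3 = 4$ and $2\cdot 1 - 3 = -1$.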
Thus we have found: multiplication of matrices helps us solve equations.
However, we are only getting started, because finding the inverse of a matrix is tricky; so we study different ways to represent matrices, and ways to calculate with them more efficiently. This is a bit vague, but intentionally so, since there is so much mathematics going on in the background which you need to learn.
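One of those "advanced algorithms" from the question is Gaussian elimination, which reduces the matrix to upper-triangular form and solves $Ax = b$ without ever forming $A^{-1}$. A rough Python sketch (my addition, using exact fractions so rounding doesn't cloud the idea):

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.
    A is a list of rows, b a list of the same length."""
    n = len(A)
    # build the augmented matrix [A | b] with exact arithmetic
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        # pick the row with the largest pivot (avoids dividing by zero)
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # eliminate the entries below the pivot
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # back-substitution on the upper-triangular system
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

print(gauss_solve([[1, 1], [2, -1]], [4, -1]))
# [Fraction(1, 1), Fraction(3, 1)] -> the same x = 1, y = 3 as before
```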
Linear algebra is really about vector spaces. To appreciate the idea of a vector space you should first get some experience with abstraction by doing hundreds of problems. A vector space is just a set of elements together with addition and scalar multiplication satisfying certain axioms. It turns out that matrices correspond to linear maps between vector spaces, once a basis of each space is chosen. This may not make too much sense to you now, but the important point is that putting matrices into different forms corresponds to changing the basis of the vector space in different ways.
The reason we like to use vector spaces is that we can then concentrate on the algebraic properties of vector spaces without having to worry about specific numbers or equations, and those properties can be applied to all sorts of problems which have little to do with solving equations.
The best thing you can do to understand linear algebra is to take a course/read a book and just start solving problems. It is impossible to really understand what it is about first and then practice doing it. The understanding comes with the practice.