[Math] find Linear Algebra in Nature

analytic-geometry, linear-algebra

I'm a Computer Science major and I've been studying Analytic Geometry and Linear Algebra this semester. Today my teacher gave a hell of an explanation about linear systems, quadratic functions, polynomial equations, derivatives, and so on. In his examples he talked about ballistics, about space and all matter, about the physical "tissue" that covers every object, about cars and their aerodynamics, and I was basically tripping out over so many crazy ideas.

I really, really loved this subject and have even been considering studying it further outside of academia, purely out of curiosity.

The thing is, I couldn't find more examples of where linear algebra shows up in nature. I know about some of its uses in Computer Science, such as computer graphics, A.I., and cryptography.

Can anyone shed some light?

Thanks!

Best Answer

Okay. There are loads of ways linear algebra can be used to look at aspects of nature. It's a tool, just like calculus, and can thus be used in many, many, many different ways. I will describe six: three relatively simple applications and three that are not as simple (or at least not as straightforward).

Systems of linear equations are the bread and butter of linear algebra. Consider chemistry, which involves (as we now know) discrete reactions converting certain amounts of moles of some substances into new substances. Balancing these reactions is a linear problem, and it's actually nontrivial in practice, as no chemical reaction occurs completely and no measurement is taken perfectly. For example, it's true that combusting one part methane in two parts oxygen returns one part carbon dioxide, two parts water, and some heat - but all observations of such things will be slightly off, so least squares can be used too. Less trivial still is the situation where multiple substances are dissolved in different containers of water and then combined. If this is repeated with different concentrations, one can use linear algebra to solve for both the volumes of all the solutions used and the chemical formula of the resulting compound in terms of the solutions' compositions (this is a bit detailed, so I won't include an example).
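To make the methane example concrete, here is a minimal sketch of balancing that reaction as a linear-algebra problem: each column of the matrix is a species, each row counts one element, and the balanced coefficients span the null space (found here via the SVD).

```python
import numpy as np

# Columns are the species CH4, O2, CO2, H2O; rows count one element each.
# Products carry a negative sign, so balancing means solving A @ x = 0.
A = np.array([
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
], dtype=float)

# A has rank 3 and 4 columns, so its null space is one-dimensional and
# is spanned by the last right-singular vector.
_, _, vt = np.linalg.svd(A)
x = vt[-1]
x = x / x[0]  # scale so methane has coefficient 1
print(x)      # → [1. 2. 1. 2.]: CH4 + 2 O2 -> CO2 + 2 H2O
```

With noisy measured data instead of exact element counts, the same SVD step becomes exactly the least-squares fit mentioned above: the last singular vector minimizes $\|Ax\|$ over unit vectors.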

This also has big implications in economics. In the so-called Leontief closed model of economics, one assumes that during the production period being examined, no goods leave or enter the system. The quantities of goods produced and consumed thus form a closed system, and everything adds up neatly and linearly. The idea is that one can measure and predict the balance of the economy by knowing how many units of each good each industry produces and how many it needs to produce other goods. One compiles a matrix containing the necessary inputs and resulting outputs of goods for the different industries (imagine each row is a different industry, company, producer, etc., and each column represents a different good - so the "basis" is over the different products, and they're not orthogonal, as some products combine to give other products). For the system to be balanced, multiplying this matrix by a goods-column vector must return that same vector. If it doesn't, there are big problems: an unbalanced system indicates an unsustainable economy.
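A minimal sketch of the closed model, with a made-up two-sector exchange matrix: the balanced vector is exactly an eigenvector for eigenvalue $1$.

```python
import numpy as np

# Hypothetical 2-sector exchange matrix: entry (i, j) is the fraction
# of sector j's output consumed by sector i. Each column sums to 1,
# so no goods enter or leave the system.
A = np.array([[0.5, 0.4],
              [0.5, 0.6]])

# A balanced production vector p satisfies A @ p = p, i.e. p is an
# eigenvector of A for the eigenvalue 1.
vals, vecs = np.linalg.eig(A)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
p = p / p.sum()  # normalize so the shares sum to 1
print(p)         # → [0.4444... 0.5555...], a 4:5 balance between sectors
```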

But there is more: the Leontief open model, where goods are allowed to enter and exit the economy, can also be analyzed. A simple way of looking at it is to modify the above system. If $A$ is my producer/good matrix and $p$ is my product vector, then the above equation was $Ap = p$. But we might introduce a demand vector $d$ indicating the entrance or exit of goods, and instead look at $Ap + d = p$. This allows one to analyze the levels of production of the economy, among many other aspects. Again, there are more complicated formulations with linear algebra, but I want to give breadth.
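Rearranging $Ap + d = p$ gives $(I - A)p = d$, which is just a linear system to solve. A sketch with invented numbers:

```python
import numpy as np

# Hypothetical consumption matrix: A[i, j] is the amount of good i
# consumed internally to produce one unit of good j.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
d = np.array([50.0, 30.0])  # external demand for each good

# A @ p + d = p  rearranges to  (I - A) @ p = d.
p = np.linalg.solve(np.eye(2) - A, d)
print(p)  # production needed to cover internal use plus external demand
```

Here the economy must produce about $90$ units of the first good and $73.3$ of the second to leave exactly $d$ for outside consumers.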

I will give one more relatively basic application. Genetic trends can be efficiently modeled with matrices. In particular, if we know the dominant traits and the number of genes affecting a genotype, then we can form a vector of the estimated probabilities of each genotype occurring (probably through some sort of statistical sampling). If we form an outer product from this vector (making a matrix, of sorts), we have a probability matrix for the genotypes of the offspring of two parents. Finding eigenvalues allows one to analyze long-term behavior. In addition, similar estimates and probability matrices allow us to infer past behavior.
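As a concrete (and classical) variant of this idea, here is a sketch of a genotype transition matrix for repeatedly crossing a population with hybrids ($Aa$); the long-term genotype distribution is the eigenvector for eigenvalue $1$.

```python
import numpy as np

# Crossing each genotype with a hybrid Aa: column j of M gives the
# offspring genotype distribution for a parent of genotype j.
# (AA x Aa -> 1/2 AA + 1/2 Aa, Aa x Aa -> 1/4, 1/2, 1/4, etc.)
M = np.array([[0.5, 0.25, 0.0],   # offspring AA
              [0.5, 0.50, 0.5],   # offspring Aa
              [0.0, 0.25, 0.5]])  # offspring aa

# Long-term behavior: the eigenvector for the largest eigenvalue (1).
vals, vecs = np.linalg.eig(M)
v = np.real(vecs[:, np.argmax(np.real(vals))])
v = v / v.sum()  # normalize to a probability distribution
print(v)         # → [0.25 0.5 0.25]: the classic 1:2:1 genotype ratio
```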

Those were three lower-level applications (in my mind - nothing rigorous about these definitions). Now for three upper-level applications.

Oscillators - anything resembling harmonic motion - are often governed by linear systems. Why? Because of Hooke's Law, which says that the force of a spring is proportional (i.e. linear) to its displacement. A single oscillator is an intro mechanics question, but coupled oscillators are, to be honest, hard (in my opinion - some physicist is rolling his eyes at my answer now). For example, take the configuration wall, spring, body, spring, body, spring, wall, where the springs are identical and the track is frictionless: this system is governed by a linear set of equations. It takes perhaps a little cleverness, but one can use eigenvalues to understand the system. More generally, systems of differential equations are often understood through hefty amounts of calculus and linear algebra, so less trivial systems can be understood too (and the goal is still to find the eigenvalues).
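For the wall-spring-body-spring-body-spring-wall setup, Newton's law gives $m\ddot{x} = -kKx$ for a $2\times 2$ stiffness matrix $K$, and the eigenvalues of $K$ give the normal-mode frequencies. A minimal sketch (with $k = m = 1$ as illustrative units):

```python
import numpy as np

# Two equal masses m between three identical springs k:
# m * x1'' = -2k x1 + k x2,  m * x2'' = k x1 - 2k x2,
# i.e. m * x'' = -k * K @ x with stiffness matrix:
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

k, m = 1.0, 1.0
vals, vecs = np.linalg.eigh(K)  # eigenvalues in ascending order: 1, 3

# Normal-mode angular frequencies: omega = sqrt(k * lambda / m)
omegas = np.sqrt(k * vals / m)
print(omegas)      # → [1.0, 1.732...]: sqrt(k/m) and sqrt(3k/m)
print(vecs[:, 0])  # in-phase mode: both masses swing together
print(vecs[:, 1])  # out-of-phase mode: masses swing against each other
```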

Sociologists are interested in analyzing relations inside groups - group behavior. To do this, they sometimes make graphs (in the graph theory sense, not the function-plotting sense) where each vertex represents an individual or a particular group. The edges between individuals can be given a weight and a direction (or perhaps both directions) representing influence or dominance (I'm not entirely sure how this is measured, but I know it is - I suppose you'd have to ask a sociologist). Then one can make adjacency matrices (a matrix recording which individuals affect which individuals) and do funny things with them: squaring one shows which individuals have a longer-range influence on the group, cubing shows even longer influence, etc. The underlying concept here is that networks can often be modeled by graphs, and graph theory uses a lot of linear algebra. So communication networks, electric circuits, even terrorist networks (pictured below) can often be examined with linear algebra. I want to note that in the network pictured below, when one computes powers of the adjacency matrix, the influence of all but 10 individuals dies down almost immediately. But those 10 are all relatively equal in influence, so this is called a non-centralized network.

This is the 9/11 terrorist network - there are many interesting ideas here
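The "squaring the adjacency matrix" trick rests on the fact that entry $(i, j)$ of $A^k$ counts the walks of length $k$ between $i$ and $j$. A toy sketch with a made-up four-person network:

```python
import numpy as np

# Toy 4-person network (invented for illustration): an edge means two
# people communicate directly; person 2 is the best connected.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# Entry (i, j) of A^k counts walks of length k between i and j, so
# powers of A reveal indirect, longer chains of influence.
A2 = np.linalg.matrix_power(A, 2)
print(np.diag(A2))  # → [2 2 3 1]: each person's number of direct contacts
print(A2[0, 3])     # → 1: one two-step path from person 0 to person 3
```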

If you've read this far, then I'm proud - this is far longer than I had anticipated. But so it goes. The last idea I want to mention is that linear algebra is not restricted to basic results, and I hope I've given that impression; here is one more example. There is a paper that explains how to derive the Lorentz transformations with linear algebra. Like most physics, it's a close approximation. I think the paper is interesting to those with only a basic familiarity with linear algebra, though some things might feel a little mystical.
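To hint at why linear algebra suffices here: the standard Lorentz boost along one axis is itself just a matrix acting on spacetime coordinates,

$$\begin{pmatrix} ct' \\ x' \end{pmatrix} = \gamma \begin{pmatrix} 1 & -\beta \\ -\beta & 1 \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}, \qquad \beta = \frac{v}{c}, \quad \gamma = \frac{1}{\sqrt{1 - \beta^2}},$$

so questions like composing two boosts reduce to matrix multiplication and eigenvector analysis.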

Anyhow, I hope you found this interesting.
