Okay. There are loads of ways linear algebra can be used to look at aspects of nature. It's a tool, just like calculus, and thus can be used in many, many, many different ways. I will speak about 6: 3 relatively simple applications and 3 not-as-simple (or straightforward, at least) applications.
Systems of linear equations are sort of the bread and butter of linear algebra. Consider chemistry, which involves (as we now know) discrete reactions converting certain numbers of moles of some substances into new substances. Balancing these reactions is a linear problem: finding, for example, that combusting 1 part methane in 2 parts oxygen returns 2 parts water, 1 part carbon dioxide, and some heat. It's actually nontrivial in practice, as no chemical reaction occurs completely and no measurement is taken perfectly - all observations of such things will be slightly off - so least squares can be used, too. Less trivial still is the idea that multiple substances are dissolved in different water containers and then combined. If this is repeated with different concentrations, then one can use linear algebra to solve for both the volumes of all the solutions used and the chemical formula of the resulting compound in terms of the solutions' compositions (this is a bit detailed, so I don't include an example).
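To make the balancing idea concrete, here is a minimal sketch (assuming NumPy): the balanced coefficients of methane combustion span the null space of the atom-conservation matrix, which we can read off from the SVD.

```python
import numpy as np

# Columns: CH4, O2, CO2, H2O (products entered with a minus sign).
# Rows: conservation of C, H, and O atoms.
A = np.array([
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
], dtype=float)

# The balanced coefficients span the (one-dimensional) null space of A;
# the last right singular vector from the SVD spans it.
_, _, Vt = np.linalg.svd(A)
coeffs = Vt[-1]
coeffs = coeffs / coeffs[0]   # normalize so CH4 has coefficient 1
print(np.round(coeffs, 6))    # → [1. 2. 1. 2.], i.e. CH4 + 2 O2 -> CO2 + 2 H2O
```

With noisy measured data the matrix would no longer have an exact null vector, and the same singular vector becomes the least-squares answer.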
This also has big implications in economics. The so-called Leontief closed model assumes that during the production period being examined, no goods leave or enter the system. The quantities of goods produced and consumed thus form a closed system, and everything adds up neatly and linearly. The idea is that one can measure and predict the balance of the economy by knowing how many units of each good each industry produces and how many it needs in order to produce other goods. One compiles a matrix containing the necessary inputs and resulting outputs of goods for the different industries (imagine each row is a different industry, or company, or producer, etc., and each column represents a different good - the basis is over the different products, and it's not orthogonal, as some products add together to give other products). For the system to be balanced, multiplying this matrix by the goods column vector must return that same vector. If it doesn't, then there are big problems: an unbalanced system indicates an unsustainable economy.
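A balanced production vector is exactly an eigenvector with eigenvalue 1. Here's a small sketch with a made-up exchange matrix (the numbers are hypothetical; assuming NumPy):

```python
import numpy as np

# Hypothetical closed exchange matrix: A[i, j] is the fraction of
# industry j's output consumed by industry i.  Each column sums to 1,
# so nothing enters or leaves the system.
A = np.array([
    [0.4, 0.2, 0.3],
    [0.3, 0.5, 0.3],
    [0.3, 0.3, 0.4],
])

# A balanced production vector p satisfies A @ p = p, i.e. p is an
# eigenvector of A with eigenvalue 1.
vals, vecs = np.linalg.eig(A)
p = np.real(vecs[:, np.argmax(np.real(vals))])
p = p / p.sum()               # scale the shares to sum to 1
print(np.round(p, 3))
```

Such a matrix (columns summing to 1, entries nonnegative) always has 1 as its largest eigenvalue, which is why a balanced vector exists at all.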
But there is more, as the Leontief open model, where goods are allowed to enter and exit the economy, can also be analyzed. A simple way of looking at it is to modify the above system. If $A$ is my producer/goods matrix and $p$ is my production vector, then the above equation was $Ap = p$. But we might introduce a demand vector $d$, indicating entrance or exit of goods. Then we look at $Ap + d = p$, i.e. $(I - A)p = d$. This allows one to analyze the levels of production of the economy and many other aspects. Again, there are more complicated formulations with linear algebra, but I want to give breadth.
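Solving the open model is then just one linear solve. A minimal sketch with hypothetical numbers (assuming NumPy):

```python
import numpy as np

# Hypothetical input requirements: A[i, j] units of good i are needed
# to produce one unit of good j.
A = np.array([
    [0.2, 0.1],
    [0.3, 0.4],
])
d = np.array([100.0, 50.0])   # external (consumer) demand

# A p + d = p  rearranges to  (I - A) p = d.
p = np.linalg.solve(np.eye(2) - A, d)
print(np.round(p, 2))         # → [144.44 155.56]
```

Note that production must exceed demand in each good, because some output is eaten internally by the production process itself.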
I will give one more relatively basic application. Genetic trends can be efficiently modeled with matrices. In particular, if we know which traits are dominant and how many genes affect a genotype, then we can form a vector out of the estimated probabilities of each genotype occurring (probably through some sort of statistical sampling). Forming an outer product from this vector (making a matrix, of sorts) gives a probability matrix for the genotypes of the offspring of two parents. Finding eigenvalues allows one to analyze long-term behavior. In addition, similar estimates and probability matrices allow us to infer past behavior.
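As a sketch of the long-term-behavior point (a standard textbook setup, not tied to any particular data): suppose every individual is crossed with a hybrid Aa each generation. The genotype probabilities then evolve by repeated multiplication with a fixed transition matrix, and the eigenvector with eigenvalue 1 is the limiting distribution.

```python
import numpy as np

# Column j gives the offspring genotype probabilities when a parent of
# genotype j (order: AA, Aa, aa) is crossed with a hybrid Aa.
M = np.array([
    [0.5, 0.25, 0.0],
    [0.5, 0.5,  0.5],
    [0.0, 0.25, 0.5],
])

v = np.array([1.0, 0.0, 0.0])  # start: the whole population is AA

# Long-term behavior: repeatedly apply M.  The other eigenvalues of M
# (0.5 and 0) shrink away, leaving the eigenvalue-1 eigenvector.
for _ in range(50):
    v = M @ v
print(np.round(v, 3))          # converges to [0.25, 0.5, 0.25]
```

Inverting the same matrices (when they are invertible) is what lets one run the model backwards and infer past distributions.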
Those were three lower-level applications (in my mind - nothing rigorous about these definitions). Now for three upper-level applications.
Oscillators - anything resembling harmonic motion - are often governed by linear systems. Why? Because of Hooke's law, which says that the force of a spring is proportional (i.e. linear) to its displacement. One oscillator, though, is an intro mechanics question. But coupled oscillators are, to be honest, hard (in my opinion - some physicist is rolling his eyes at my answer now). For example, consider the configuration wall, spring, body, spring, body, spring, wall, where the springs are identical and the track is frictionless: this system is governed by a linear set of equations. The equations are perhaps a little clever, but one can use eigenvalues to understand the system. More generally, systems of differential equations are often understood through hefty amounts of calculus and linear algebra, so less trivial systems can be understood too (and the goal is still to find the eigenvalues).
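For that wall-spring-body-spring-body-spring-wall configuration specifically, the eigenvalues of the coupling matrix give the squared normal-mode frequencies. A minimal sketch, in units where $k/m = 1$ (assuming NumPy):

```python
import numpy as np

# Two equal masses m joined by three identical springs k between two
# walls.  Newton's law gives  x'' = -(k/m) K x  with coupling matrix K.
k_over_m = 1.0
K = np.array([
    [2.0, -1.0],
    [-1.0, 2.0],
])

# Normal-mode frequencies satisfy omega^2 = (k/m) * eigenvalue(K).
eigvals, eigvecs = np.linalg.eigh(K)
omegas = np.sqrt(k_over_m * eigvals)
print(np.round(omegas, 4))    # sqrt(k/m) and sqrt(3k/m)
```

The two eigenvectors are the in-phase mode (both masses swinging together) and the out-of-phase mode (masses swinging against each other), which is why the second frequency is higher.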
Sociologists are interested in analyzing relations inside of groups - group behavior. Sometimes, to do this, they make graphs (in the graph theory sense, not in the function-plotting sense) where each vertex represents an individual or a particular group. The edges between individuals can be given a weight and a direction (or perhaps a bidirection) representing influence or dominance (I'm not entirely sure how this is done, but I know it is - I suppose you'd have to ask a sociologist). Then one can make adjacency matrices (a matrix reflecting which individuals affect which individuals) and do funny things with them - squaring one will show which individuals have a long influence on the group, cubing will show even longer influence, etc. The underlying concept here is that networks can often be modeled by graphs, and graph theory can use a lot of linear algebra. So communication networks, electric circuits, even terrorist networks (picture below) can often be examined with linear algebra. I want to note that in the network pictured below, when one computes powers of the adjacency matrix, all but 10 individuals' influence dies down almost immediately. But these 10 are all relatively equal in influence, so this is called a non-centralized network.
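The "squaring" trick looks like this in a toy four-person network (entirely made up, not the network pictured; assuming NumPy):

```python
import numpy as np

# Hypothetical directed influence network: A[i, j] = 1 means person j
# influences person i directly.
A = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
])

# Powers of A count influence paths: (A @ A)[i, j] is the number of
# two-step chains through which j influences i.
A2 = A @ A
print(A2)
```

Higher powers count longer chains, and summing a few powers gives an overall "reach" score for each individual, which is the sort of thing that dies down quickly for all but the central members of a network.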
If you've read this far, then I'm proud. This is far longer than I had anticipated. But so it goes. The last idea I want to mention is that linear algebra is not restricted to basic results - I hope I have given that impression already, but here is one more example. There is a paper that explains how to derive the Lorentz transformations with linear algebra. Like most physics, it's a close approximation. I think the paper is accessible to those with only a basic familiarity with linear algebra, though some things might feel a little mystical.
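To give a flavor of the result (this is the standard special-relativity statement, not that paper's particular derivation): a boost along the $x$-axis with velocity $v$ acts linearly on the coordinates $(ct, x)$,

```latex
\begin{pmatrix} ct' \\ x' \end{pmatrix}
= \gamma
\begin{pmatrix} 1 & -\beta \\ -\beta & 1 \end{pmatrix}
\begin{pmatrix} ct \\ x \end{pmatrix},
\qquad
\beta = \frac{v}{c}, \quad \gamma = \frac{1}{\sqrt{1 - \beta^2}}
```

so composing boosts is just matrix multiplication, which is exactly what makes a purely linear-algebraic derivation possible.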
Anyhow, I hope you found this interesting.
I personally do not think it is ideal to try to learn linear algebra from one text. My personal favorite text is the one by Gilbert Strang. It is very good at the conceptual aspects of the subject, and in particular focuses on abstract topics starting as early as chapter 2. By contrast, the main other text that I am familiar with, by David Lay, sticks to essentially computational topics until chapter 4 (although to be fair chapter 3 is rather short).
The downside to this is obvious: Strang's treatment of the basics, while well-written, is relatively terse. As a result, I think most students will struggle if they start with Strang. So I would suggest starting with another text (I really don't have a recommendation; I found Lay's book adequate but not excellent) and then moving on to Strang when you have grasped the basics.
I especially think Strang would ultimately be good for you in particular because you mention that you want more explanation rather than conciseness. Strang definitely provides that, with a lot of expository paragraphs in each chapter (except the first).
1) The following excerpt, from The Great Soviet Encyclopedia (1979), sums up the subjects covered by classical linear algebra:
Linear Algebra is the part of algebra that is most important for applications. The theory of linear equations was the first problem to arise that pertained to linear algebra. The development of the theory led to the creation of the theory of determinants and subsequently to the theory of matrices and the related theories of vector spaces and linear transformations in them. Linear algebra also encompasses the theory of forms, in particular, quadratic forms, and, in part, the theory of invariants and the tensor calculus. Some branches of functional analysis constitute a further development of corresponding problems of linear algebra associated with the passage from finite-dimensional vector spaces to infinite-dimensional linear spaces.
2) A popular and explanatory article: https://betterexplained.com/articles/linear-algebra-guide/
3) There is a fascinating concept map on the seventh page of this book preview, which you should definitely look at: https://minireference.com/static/excerpts/noBSguide2LA_preview.pdf
4) Two book recommendations
a) I. M. Gelfand - Lectures on Linear Algebra
b) S. Axler - Linear Algebra Done Right