I'm not going to try to answer all of your questions, because this is really a very broad topic. I will at least give you a start, and then the best thing you can do is either take another course or open a book and learn some things, coming back to ask more specific questions along the way.
Surely you are familiar with equations like $5x = 6$, where we want to find all solutions $x$ with $x$ in some field, say in this case the real numbers. In a field every nonzero element has an inverse, so you can multiply by $1/5$ to get $x = 6/5$.
Although linear algebra isn't really just about solving equations, that's where it starts. It's called linear because we only want to solve equations that are linear in the unknown variables. The simplest case would be something like
$$
\begin{aligned}
x + y &= 4,\\
2x - y &= -1.
\end{aligned}
$$
We can write this system in matrix form. If you remember how matrix multiplication works, the system becomes $Ax = b$:
$$
\begin{pmatrix}
1 & 1\\
2 & -1
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
=
\begin{pmatrix}
4\\
-1
\end{pmatrix}
$$
The reason we multiply matrices the way we do is that we want to solve $Ax = b$ by multiplying by the inverse of $A$ to get $x = A^{-1}b$. Of course, not all matrices have inverses, so the set of all $n\times n$ matrices, with matrix addition and multiplication, is a ring but not a field. A ring is sort of like a field, except we drop the requirement that every nonzero element has an inverse. Matrix multiplication is also not commutative: $AB$ is not necessarily equal to $BA$.
In the above case $A$ does have an inverse, and you can multiply on the left by $A^{-1}$ (see if you can find it) to get the solution to this system of equations.
Thus we have found: multiplication of matrices helps us solve equations.
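As a quick numerical check of the system above, here is a sketch using NumPy. Note that in practice one rarely forms $A^{-1}$ explicitly; `np.linalg.solve` factorizes $A$ and solves $Ax = b$ directly, which is both faster and more accurate.

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above.
A = np.array([[1.0, 1.0],
              [2.0, -1.0]])
b = np.array([4.0, -1.0])

# Solve A x = b directly (preferred over computing the inverse).
x = np.linalg.solve(A, b)
print(x)  # [1. 3.]

# For comparison, multiplying by the explicit inverse gives the same answer.
print(np.linalg.inv(A) @ b)  # [1. 3.]
```

You can verify by hand that $x = 1$, $y = 3$ satisfies both equations: $1 + 3 = 4$ and $2 \cdot 1 - 3 = -1$.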
However, we are only beginning because finding the inverse of a matrix is tricky, so we study the different ways to represent matrices and calculate with matrices in order to more efficiently move them around. This is a bit vague but intentionally so since there is so much mathematics going on in the background which you need to learn.
Linear algebra is really about vector spaces. To appreciate the idea of a vector space you should first get some experience with abstraction by doing hundreds of problems. A vector space is just a set of elements together with addition and scalar multiplication that satisfy certain axioms. It turns out that matrices correspond to maps between vector spaces in a chosen basis of that space. This may not make too much sense to you now, but the important point is that putting matrices in different forms corresponds to changing the basis of the vector space in different ways.
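To make the last point a little more concrete: if $A$ represents a linear map in one basis and $P$ is an invertible change-of-basis matrix, then $P^{-1}AP$ represents the same map in the new basis. Here is a small sketch using NumPy, where the particular matrices $A$ and $P$ are arbitrary choices for illustration; quantities intrinsic to the map, like the trace and determinant, do not depend on the basis.

```python
import numpy as np

# A linear map A and an (arbitrarily chosen) invertible
# change-of-basis matrix P.
A = np.array([[1.0, 1.0],
              [2.0, -1.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The same linear map expressed in the new basis.
B = np.linalg.inv(P) @ A @ P

# Similar matrices share basis-independent invariants.
print(np.trace(A), np.trace(B))          # equal
print(np.linalg.det(A), np.linalg.det(B))  # equal
```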
The reason we like to work with vector spaces is that we can then concentrate on their algebraic properties without having to worry about specific numbers or equations, and the results can be applied to all sorts of problems that have little to do with solving equations.
The best thing you can do to understand linear algebra is to take a course/read a book and just start solving problems. It is impossible to really understand what it is about first and then practice doing it. The understanding comes with the practice.
One of the most frequent occasions where linear systems of $n$ equations in $n$ unknowns arise is in least-squares optimization problems. Let us look at an example. Say we are studying two physical quantities $y$ and $x$ and we conjecture that $y$ is a second-order polynomial function of $x$, i.e. $y=\alpha x^2 + \beta x + \gamma$ for some unknown real numbers $\alpha$, $\beta$, $\gamma$. Now say we perform experiments and obtain measurements $(x_1,y_1), \ldots, (x_{100},y_{100})$. Applying the polynomial model to the measurements yields $y_i=\alpha x_i^2 + \beta x_i + \gamma$ for $i=1, \ldots, 100$, or in matrix form $X k=y$, where $k=[\alpha \; \beta \; \gamma]^T$, $y=[y_1 \cdots y_{100}]^T$, and the $i$-th row of $X$ is the row vector $[x_i^2 \; x_i \; 1]$. Now, as you might observe, we have $100$ equations in $3$ unknowns, i.e. our linear system $X k=y$ is overdetermined. Practically speaking, this system is consistent (i.e. it has a solution) only if $y$ really is related to $x$ by a second-order polynomial (i.e. our conjecture is true) and, additionally, there is no noise in our measurements. So assume that at least one of these two conditions fails. Then the system $X k=y$ will in general have no solution, and one might instead look for a vector $k$ that minimizes $\|X k - y\|_2^2$, i.e. the squared error. The solution of this optimization problem is the solution of the $3 \times 3$ system $X^T X k = X^T y$ (the normal equations). This formulation comes up all the time in engineering, e.g. in signal prediction. So, least-squares problems lead to square (i.e. $n \times n$) linear systems of equations.
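The procedure above can be sketched in code as follows (using NumPy; the particular quadratic $y = 2x^2 - x + 3$ and the noise level are made-up assumptions for the example):

```python
import numpy as np

# Simulated measurements of a quadratic relationship with small noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 100)
y = 2.0 * x**2 - 1.0 * x + 3.0 + 0.01 * rng.standard_normal(100)

# Design matrix X with i-th row [x_i^2, x_i, 1].
X = np.column_stack([x**2, x, np.ones_like(x)])

# Solve the 3x3 normal equations X^T X k = X^T y.
k = np.linalg.solve(X.T @ X, X.T @ y)
print(k)  # close to [2, -1, 3]

# np.linalg.lstsq solves the same least-squares problem,
# but more stably (via a factorization of X rather than X^T X).
k2, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Forming $X^T X$ explicitly squares the condition number of the problem, which is why library routines like `lstsq` are preferred for ill-conditioned data.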
Best Answer
Multiplying both sides by $e^{2x}$ gives $$1-e^{2x}=-3xe^{2x},$$ which rearranges to $$e^{2x}(3x-1)=-1.$$ Multiplying both sides by $\frac23$, and then by $e^{-2/3}$, puts this in the form $ue^u = c$: $$e^{2x}\left(2x-\frac23\right)=-\frac23, \qquad e^{2x-\frac23}\left(2x-\frac23\right)=-\frac23 e^{-\frac23}.$$ Applying the Lambert W function, $$2x-\frac23=W_n\left(-\frac23 e^{-\frac23}\right),\quad n\in\mathbb{Z},$$ $$x=\frac13+\frac12W_n\left(-\frac23 e^{-\frac23}\right),\quad n\in\mathbb{Z}.$$ The Lambert W function has infinitely many branches, and in this case two of them are real-valued: the principal branch gives $W_0\left(-\frac23 e^{-\frac23}\right)=-\frac23$ (so $x=0$), and the other real branch gives $W_{-1}\left(-\frac23 e^{-\frac23}\right)\approx-1.429355228$.
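If you want to check these values numerically, SciPy exposes the branches of the Lambert W function as `scipy.special.lambertw`, where the branch index `k` plays the role of $n$ above. A short sketch:

```python
import numpy as np
from scipy.special import lambertw

# The argument of W in the solution above: -2/3 * e^(-2/3).
arg = -2.0 / 3.0 * np.exp(-2.0 / 3.0)

# The two real branches at this argument.
w0 = lambertw(arg, k=0).real    # principal branch: exactly -2/3
wm1 = lambertw(arg, k=-1).real  # approximately -1.429355228

for w in (w0, wm1):
    x = 1.0 / 3.0 + 0.5 * w
    # Each x should satisfy e^{2x}(3x - 1) = -1 from the derivation.
    print(x, np.exp(2 * x) * (3 * x - 1))
```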