You are correct on some accounts but there seems to be a bit of confusion as well. Let us first address the example in your first question.
Suppose we have a linear operator $T:V\rightarrow V$ where $\dim V = n$ for odd $n$. Let us fix a basis $\mathcal{B}$ for $V$ and let $A = [T]_\mathcal{B}$ be the matrix of the mapping with respect to $\mathcal{B}$. As you've said, the map $[\ ]_\mathcal{B}: L(V)\rightarrow M_n(\mathbb{R})$ is a vector space isomorphism between the space of operators on $V$ and the space of $n\times n$ matrices.
First of all, note that you are not "free to choose" $V$ to be $\mathbb{R}^n$. $T$ is already defined to be a linear operator on $V$ and in this case $V$, whatever it is, is fixed. However, the power of interpreting the mapping as a matrix is that we can effectively carry out all the calculations as if the mapping were from $\mathbb{R}^n$ to $\mathbb{R}^n$: this is precisely what an isomorphism allows us to do.
For example, suppose we have a linear mapping $T$ on $P_2(\mathbb{R})$, the vector space of polynomials with real coefficients of degree at most $2$:
$$T(ax^2 + bx + c) = bx + c$$
In this case, our vector space $V$ is $P_2(\mathbb{R})$. We are not free to change it. However, what we are allowed to do is to study the matrix
$$A=\begin{pmatrix}0 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}$$
which is just the matrix representation of $T$ with respect to the standard basis $\{x^2,\ x,\ 1\}$. The point here is that $A$ is not $T$. It is a representation of $T$ which happens to share many of the same properties. Therefore by studying $A$, we gain valuable insight into the behaviour of $T$. For example, one way of finding eigenvectors for $T$ would be to find the eigenvectors of $A$. The eigenvectors of $A$ then correspond uniquely via isomorphism to the eigenvectors of $T$.
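As a small numerical sketch of this correspondence (using NumPy; the matrix $A$ is the one from the example above), we can compute the eigenvectors of $A$ and read each coordinate vector $(a, b, c)$ back as the polynomial $ax^2 + bx + c$:

```python
import numpy as np

# Matrix representation of T(ax^2 + bx + c) = bx + c
# with respect to the basis {x^2, x, 1}.
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Each eigenvector (a, b, c) of A corresponds, via the coordinate
# isomorphism, to the polynomial a*x^2 + b*x + c, an eigenvector
# of T with the same eigenvalue.
for lam, v in zip(eigvals, eigvecs.T):
    a, b, c = v
    print(f"eigenvalue {lam:g}: polynomial ({a:g})x^2 + ({b:g})x + ({c:g})")
```

Here one finds eigenvalue $0$ with eigenvector $x^2$ (i.e. $T(x^2) = 0$) and eigenvalue $1$ with eigenvectors $x$ and $1$, exactly as the definition of $T$ predicts.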
You ask "If I'm given a matrix with entries in $\mathbb{F}$, how exactly would I go about determining information about it from linear maps?", but this question is a little backwards. If we have a matrix, then its information is readily available to us. For example, a huge amount of information can be obtained by simply row reducing the matrix. In general, it is easier to study matrices than to study abstract linear transformations and this is precisely why we represent linear transformations with matrices.
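To illustrate how much information row reduction exposes, here is a brief sketch using SymPy (the particular matrix is a made-up example): the reduced row echelon form gives the rank, the pivot columns, and from there the dimension of the kernel, all in one computation.

```python
from sympy import Matrix

# A hypothetical 3x3 matrix chosen for illustration; its second
# row is twice the first, so it is singular.
M = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

rref_form, pivots = M.rref()
print(rref_form)   # reduced row echelon form
print(pivots)      # indices of the pivot columns
print(M.rank())    # rank = number of pivots
```

For this matrix the rank is $2$, so the corresponding linear map on $\mathbb{R}^3$ has a one-dimensional kernel by the rank-nullity theorem.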
The bottom line is that matrices serve as simpler representatives for linear mappings. Given an arbitrary linear mapping, we can fix bases for the domain and codomain and obtain a corresponding matrix representation for the mapping. Conversely, for a given choice of bases, each matrix can also be interpreted as a linear map. However, we seldom use the latter fact since it is easier to work with matrices than with general linear mappings.
Some of your questions were a little hard to interpret so I hope I have addressed your main concerns here. Please do not hesitate to ask for clarification.
a) $P$ invertible:
a1) Then
$$x_1=x_2\Longleftrightarrow y_1=Px_1=Px_2=y_2$$
a2) $$ x_1\neq x_2 \Longleftrightarrow y_1=Px_1\neq Px_2=y_2 $$
b) $P$ singular:
$$x_1=x_2\Longrightarrow y_1=Px_1=Px_2=y_2$$
but the converse does not hold, because infinitely many different $x_i$'s can be mapped to the same $y$. If $v$ is an eigenvector of $P$ corresponding to the eigenvalue zero, then $Pv=0$. Now consider the images of $x_1$ and $x_2 = x_1+av$ for $a\in\mathbb{R}$. Then
$$y_2=Px_2=P(x_1+av)=Px_1+aPv=Px_1=y_1$$
even though $x_2\neq x_1$ for $a\neq 0$.
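This collapse is easy to see numerically. A minimal sketch with NumPy (the particular $P$, $v$, and $x_1$ are made-up examples): $P$ below is singular with $v=(1,-1)$ in its kernel, so every $x_1 + av$ has the same image.

```python
import numpy as np

# A singular 2x2 matrix P (rank 1) with Pv = 0 for v = (1, -1).
P = np.array([[1.0, 1.0],
              [1.0, 1.0]])
v = np.array([1.0, -1.0])
assert np.allclose(P @ v, 0)  # v spans the kernel of P

x1 = np.array([2.0, 3.0])
y1 = P @ x1

# Every x2 = x1 + a*v maps to the same y1, so P is not injective.
for a in [1.0, -4.5, 100.0]:
    x2 = x1 + a * v
    assert np.allclose(P @ x2, y1)
```

An invertible $P$ has only the zero vector in its kernel, which is exactly why the equivalence in case (a) holds there but fails here.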
As Harry says, you can't (the example of affine transformations can be tweaked to work because they're just linear ones with the origin translated). However, approximating a nonlinear function by a linear one is something we do all the time in calculus through the derivative, and is what we often have to do to make a mathematical model of some real-world phenomenon tractable.