Most introductory books on Linear Algebra have a Theorem which says something like
Let $A$ be a square $n \times n$ matrix. Then the following are equivalent:
What does this mean? It simply means that if you want to check whether any one of these conditions holds, you can pick any other condition from the list and check that one instead.
Your question is: can you, instead of checking the third or fourth condition, check the second? That is exactly what the Theorem says: yes.
One way to think about a linearly independent list of vectors is:
A list of vectors is linearly independent if and only if removing any vector from the list will result in a list whose span is strictly smaller than that of the original list.
Intuitively, the list is minimal for its span: remove any vector and you get a strictly smaller span. In other words, the list has no linear redundancies.
Another, more intrinsic way of thinking about linearly independent lists is:
A list of vectors is linearly independent if and only if no vector in the list is a linear combination of the other vectors in the list.
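If you want to check this concretely for coordinate vectors, here is a minimal numpy sketch (the helper name `is_linearly_independent` is just illustrative): a finite list is linearly independent exactly when the matrix with those vectors as columns has rank equal to the length of the list.

```python
import numpy as np

def is_linearly_independent(vectors):
    """A finite list of coordinate vectors is linearly independent iff the
    matrix having them as columns has rank equal to the number of vectors."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + v2  # a linear combination of the others: a redundancy

print(is_linearly_independent([v1, v2]))      # True
print(is_linearly_independent([v1, v2, v3]))  # False: v3 = v1 + v2
```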
One way to think about a spanning set for a vector space is:
A list of vectors in $V$ is a spanning set if every vector of $V$ is in the span of the list.
Intuitively, the list is "sufficient" to get you all vectors in $V$ (via linear combinations).
Note that "linearly independent" is intrinsic: it depends on the vectors (and the vector space operations), and only on them. Whereas "spanning set" is extrinsic: whether a set of vectors spans depends on which vector space you are working on. (E.g., $\{1,x,x^2\}$ is a spanning set for the vector space of real polynomials of degree at most $2$, but not for the vector space of all real polynomials.)
What the Lemma says is that spanning sets have to be at least as large as linearly independent sets.
This is not trivial, and in fact it turns on the fact that your scalars come from a field. To see that the assertion is not trivial, imagine that instead of a vector space, where you can multiply by any element of the field, we only allow linear combinations with integer coefficients, and consider the "vector space" (in fact it is called a module, or a $\mathbb{Z}$-module) of all integers. Here, the list consisting of $2$ and $3$ is minimal: you can get any integer as an integral linear combination of $2$ and $3$, since $1 = 3 - 2$ and hence $n = n\cdot 3 - n\cdot 2$ for every integer $n$; but if you drop either of them, you can't get them all. You will get either just the multiples of $2$, or just the multiples of $3$.
On the other hand, the set consisting only of $1$ is a spanning set: every integer is an integer linear combination of $1$. So in this situation, we have a "linearly independent" set (a minimal set with respect to linear combinations) that has more elements than a spanning set.
(Caveat: There are multiple ways of defining "linearly independent", which are equivalent in a vector space; in this setting, they wouldn't be. For example, consider the definition that says a list of vectors $v_1,\ldots,v_k$ is linearly independent if and only if whenever a linear combination equals zero, $\alpha_1 v_1+\cdots + \alpha_k v_k = \mathbf{0}$, all scalars must be zero: $\alpha_1=\cdots=\alpha_k=0$. Under this definition, the list I gave would not be "integrally linearly independent", because we can get $0$ as $3(2) - 2(3) = 0$.)
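The arithmetic behind this example is easy to verify mechanically; here is a small Python check using only the standard library:

```python
from math import gcd

# {2, 3} generates all of Z: since 1 = 3 - 2, every n equals n*3 - n*2.
n = 7
assert n == n * 3 - n * 2
assert gcd(2, 3) == 1  # the underlying reason {2, 3} generates Z

# Dropping either generator loses elements, e.g. 1 is not a multiple of 2 or 3:
assert all(2 * k != 1 for k in range(-100, 101))
assert all(3 * k != 1 for k in range(-100, 101))

# Yet {2, 3} fails the "all scalars must be zero" definition over Z:
assert 3 * 2 - 2 * 3 == 0  # a nontrivial integral relation
```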
In a vector space, the most basic relationship between linearly independent sets and spanning sets is that of the Lemma. In fact, the lemma can be refined to say that every linearly independent set can be extended to a set that is both linearly independent and spans; and every spanning set contains a spanning set that is linearly independent. From this you will show that any two linearly independent spanning sets for the same vector space have the same number of elements. That number is called the "dimension" of the vector space, and it is an invariant of fundamental importance in Linear Algebra.
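Here is an illustrative numpy sketch of the second refinement for coordinate vectors (the helper `extract_basis` is my own name, not standard): greedily keeping the vectors that raise the rank extracts, from any spanning set, a subset that is linearly independent and still spans, and its size is the dimension.

```python
import numpy as np

def extract_basis(vectors):
    """Greedily keep each vector that raises the rank; the survivors form
    a linearly independent list with the same span as the input."""
    kept = []
    for v in vectors:
        candidate = kept + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            kept = candidate
    return kept

# A redundant spanning set for R^2:
spanning = [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])]
print(len(extract_basis(spanning)))  # 2, the dimension of R^2
```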
The phenomenon can be phrased in terms of free objects. It might have been hard to see how this phenomenon can be phrased categorically due to there being two categories at play here (namely, the category of vector spaces, and the category of sets).
More precisely, let $\def\Vect{\mathbf{Vect}}\Vect$ be the category of vector spaces and linear transformations (over a fixed base field), and $\def\Set{\mathbf{Set}}\Set$ the category of sets and functions, and take $U:\Vect\to\Set$ to be the forgetful functor that sends a vector space to its underlying set. This functor admits a left adjoint $F:\Set\to\Vect$ which sends a set $S$ to the "free vector space" generated by the set; i.e., the vector space obtained from taking the elements of $S$ to be a basis. The universal property of these free objects is simply:

For every set $S$ and every vector space $V$, each function $S\to U(V)$ extends to a unique linear map $F(S)\to V$; that is, $\operatorname{Hom}_\Vect(F(S),V)\cong\operatorname{Hom}_\Set(S,U(V))$, naturally in $S$ and $V$.
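For a finite set $S$ one can model $F(S)$ and this universal property directly. The following Python sketch (the names `basis_vector` and `extend` are purely illustrative) stores a formal linear combination as a dictionary from elements of $S$ to coefficients, and extends a function $f:S\to V$ to a linear map $F(S)\to V$:

```python
def basis_vector(s):
    """The element s viewed as a basis vector of the free vector space F(S)."""
    return {s: 1.0}

def extend(f, dim):
    """Universal property: a function f : S -> V (V modelled as dim-tuples
    of floats) extends to the linear map F(S) -> V sending the formal
    combination sum_s c_s * s to sum_s c_s * f(s)."""
    def linear_map(combo):
        out = [0.0] * dim
        for s, c in combo.items():
            image = f(s)
            for i in range(dim):
                out[i] += c * image[i]
        return tuple(out)
    return linear_map

# S = {'a', 'b'}; f sends 'a' -> (1, 0) and 'b' -> (1, 1) in V = R^2.
f = {'a': (1.0, 0.0), 'b': (1.0, 1.0)}.get
T = extend(f, dim=2)
assert T(basis_vector('a')) == f('a')  # T extends f along S -> F(S)
print(T({'a': 2.0, 'b': -1.0}))        # (1.0, -1.0) = 2*f('a') - f('b')
```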
Now, let $S\subseteq V$ be a subset of your vector space. Then, more properly, this means you are taking an inclusion $S\hookrightarrow U(V)$ into the underlying set of $V$; by the universal property, this inclusion corresponds to a canonical linear map $F(S)\to V$. We can use the left adjoint $F$ to characterise "linearly independent set" and "spanning set":

$S$ is linearly independent if and only if the canonical map $F(S)\to V$ is a monomorphism (equivalently, injective).

$S$ is a spanning set for $V$ if and only if the canonical map $F(S)\to V$ is an epimorphism (equivalently, surjective).
If you look carefully, these statements say exactly the same thing as the usual linear-algebraic definitions of these terms.
In this language, the duality is much more apparent: monomorphisms and epimorphisms are formally dual in any category (in the sense that a morphism in $\mathcal C$ is a monomorphism iff its corresponding morphism in $\mathcal C^{\mathrm{op}}$ is an epimorphism).
If you like, you can also try to prove your observations in this language.
For instance, let $S'\subseteq S\subseteq U(V)$, and denote the inclusion of subsets by $i:S'\hookrightarrow S$. Then the induced map $F(S')\to V$ is the composite $F(S')\xrightarrow{Fi}F(S)\to V$. Since $i$ is injective, $Fi$ is a monomorphism; so if $S$ is linearly independent (i.e., $F(S)\to V$ is monic), then the composite is monic, and $S'$ is linearly independent too. Dually, if $S'$ spans (i.e., $F(S')\to V$ is epic), then $F(S)\to V$ is epic as well, since the epi $F(S')\to V$ factors through it; so $S$ spans.
Edit: The second part is harder, because there is a nuance: if $T$ maps multiple vectors in $S$ to the same vector in $T(S)$, then you have to deem $T(S)$ linearly dependent (so in some sense, you are implicitly viewing $T(S)$ as a list, or a multiset, rather than a mere set). (This was pointed out by @Thinknot; I made the same oversight.) While one could work to make your observation categorically precise, there is probably instead a more satisfactory "duality" that looks similar:

(1) If $T:V\to W$ is injective and $T(S)$ is linearly independent in $W$, then $S$ is linearly independent in $V$.

(2) If $T:V\to W$ is surjective and $S$ spans $V$, then $T(S)$ spans $W$.
To see (categorically) why these are true, note that we have the commutative square $\require{AMScd}$ \begin{CD} F(S) @>>> V \\ @VFTVV @VVTV \\ F(TS) @>>> W \end{CD} To see (1): if $T$ is injective, then so is its restriction $S\to T(S)$, which means that both vertical arrows in the above square are monic. If $T(S)$ is linearly independent, then $F(TS)\to W$ is monic as well, so the composite $F(S)\to F(TS)\to W$ is monic; by commutativity of the square, the composite $F(S)\to V\to W$ is therefore monic, and hence $F(S)\to V$ is monic. In other words, $S$ is linearly independent in $V$.
To see (2): if $T$ is surjective and $S$ is a spanning set, then $F(S)\to V\to W$ is a composite of epis, and is thus epi. By commutativity, $F(S)\to F(TS)\to W$ is then epi, which implies that $F(TS)\to W$ is epi; that is, $T(S)$ spans $W$. Notice that these arguments look much more like formal duals of each other.
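If a numerical sanity check helps, here is an illustrative numpy sketch of (1) and (2) in $\mathbb{R}^3$, using a random (almost surely invertible, hence both injective and surjective) matrix for $T$ and rank computations for "independent" and "spans":

```python
import numpy as np

rng = np.random.default_rng(0)

def independent(vectors):
    return np.linalg.matrix_rank(np.column_stack(vectors)) == len(vectors)

def spans(vectors, dim):
    return np.linalg.matrix_rank(np.column_stack(vectors)) == dim

T = rng.normal(size=(3, 3))  # almost surely invertible: injective and surjective

# (1) T injective and T(S) independent  =>  S independent:
S = [rng.normal(size=3) for _ in range(2)]
if independent([T @ v for v in S]):
    assert independent(S)

# (2) T surjective and S spans R^3  =>  T(S) spans R^3:
S = [rng.normal(size=3) for _ in range(3)]
if spans(S, 3):
    assert spans([T @ v for v in S], 3)
```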