The following intuitive explanation can be given.
Each row of the matrix $A$ of a linear map $T:V\to W$ describes one coordinate, with respect to the chosen basis of$~W$, of the images $T(v)$ of vectors in$~V$. (This is a linear form on $V$, hence an element of $V^*$, but there is no need to stress that point.) If one row is a linear combination of other rows, then this information allows reconstructing the corresponding coordinate from those other coordinates, given only the knowledge that $T(v)$ belongs to the image subspace $\def\Im{\operatorname{Im}}\Im(T)\subseteq W$. Therefore if one selects a maximal independent subset of $r$ rows of $A$, the selection defines a subset of $r$ coordinates, such that for $w\in\Im(T)$ all other coordinates of$~w$ can be recovered from those coordinates of$~w$ (by forming appropriate linear combinations).
It should be fairly intuitive that for a subspace of dimension $d$, knowing $d$ (properly chosen) coordinates of a vector of the subspace determines exactly which vector it is. Formally this can be shown as follows. Our $r$ coordinates define a projection $p:\def\R{\Bbb R}W\to\R^r$, and our reconstruction of coordinates defines a linear map $s:\R^r\to W$ (keeping the given coordinates and reconstructing the remaining ones) such that $s(p(w))=w$ for all $w\in\Im(T)$ (we don't care what happens to elements outside $\Im(T)$). Thus the restriction of $p$ to $\Im(T)$ is certainly injective, and if we can show it to be surjective then $r=\dim(\Im(T))$, which is $\operatorname{rk}(T)$, will follow. But if it were not surjective, then there would be at least one nontrivial equation satisfied by all elements of $p(\Im(T))\subset\R^r$, a relation contradicting the independence of our set of $r$ chosen coordinates.
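The row-selection idea above can be checked numerically. This is a small sketch with a hypothetical $4\times 3$ matrix whose last two rows are, by construction, combinations of the first two, so a maximal independent set of rows has size $r=2$; for any $w$ in the image, the remaining coordinates are recovered from the two chosen ones by the same combinations.

```python
import numpy as np

# Hypothetical example: rows 2 and 3 are combinations of rows 0 and 1,
# so a maximal independent set of rows is {row 0, row 1}, i.e. r = 2.
A = np.array([[1., 0., 1.],
              [0., 1., 2.],
              [1., 1., 3.],   # row0 + row1
              [2., 1., 4.]])  # 2*row0 + row1

r = np.linalg.matrix_rank(A)   # rank = size of a maximal independent row set
assert r == 2

# For any w = A v in Im(T), the other coordinates of w are recovered
# from the chosen coordinates (w[0], w[1]) by the same combinations:
v = np.array([3., -1., 2.])
w = A @ v
assert np.isclose(w[2], w[0] + w[1])     # coordinate 2 = coord 0 + coord 1
assert np.isclose(w[3], 2*w[0] + w[1])   # coordinate 3 = 2*coord 0 + coord 1
```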
Let's go back to definitions. Range is a set-theoretic concept. Given a set map $f:X \to Y$, we denote the image or range as the set of values in $Y$, i.e. $\{y\in Y|y=f(x) \text{ for some }x\in X\}$.
Your other concepts live in the world of linear algebra. Let me define things abstractly, rather than by matrices. Given a linear transformation $T:V\to W$, we define the nullspace to be the set of vectors $v\in V$ such that $T(v)=0$, i.e. $T^{-1}(0)$. It turns out both the range and the nullspace of a transformation $T$ are subspaces of their respective ambient spaces ($W$ and $V$). Since dimension is a well-defined concept (over fields), we can ask for the dimensions of these two subspaces. The dimension of the range is defined to be the rank, while the dimension of the nullspace is defined to be the nullity.
How does all this relate to matrices? Recall that the data of the matrix of a transformation $T$ is that the columns record where a basis of $V$ lands after applying $T$. Extending linearly (i.e. you know $T(e_1),\dots,T(e_n)$, so you define $T(a_1e_1+\dots+a_ne_n):=a_1T(e_1)+\dots+a_nT(e_n)$) determines $T$ on the entirety of $V$ (because the $e_i$ span $V$), and does so uniquely (because the $e_i$ are linearly independent).
Knowing this, we can start translating all our definitions from the abstract into statements about matrices. For example, we see the range of a matrix is the span of its columns. The rank of a matrix would then be the number of linearly independent columns. For example, the $2\times 2$ matrix consisting of all 1's has rank 1 despite being a transformation between two 2-dimensional spaces.
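The all-ones example is easy to verify directly: both columns are equal, so the column span is one-dimensional.

```python
import numpy as np

# The 2x2 all-ones matrix maps R^2 to R^2, but its two columns coincide,
# so the column span (the range) is one-dimensional.
M = np.ones((2, 2))
rank_M = np.linalg.matrix_rank(M)
assert rank_M == 1
```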
How would we define the nullspace of a matrix? Denoting the matrix by $M$, it would be the subspace of vectors $v\in V$ such that $Mv=0$. Concretely, this means solving a system of linear equations, which is what linear algebra was built to do. To define the nullity of a matrix, we can use the rank-nullity theorem, which tells us $\dim(V)=\operatorname{rk}(T)+\operatorname{nul}(T)$, so we can define the nullity of the matrix as $\dim(V)-\operatorname{rk}(M)$.
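Continuing the all-ones example, one common way to compute a nullspace basis numerically is via the SVD (the right singular vectors belonging to zero singular values span $\{v : Mv=0\}$); this also lets us check rank-nullity. The tolerance choice below is a standard numerical convention, not part of the mathematical definition.

```python
import numpy as np

M = np.ones((2, 2))

# Nullspace via SVD: right singular vectors with (near-)zero singular values.
_, s, Vt = np.linalg.svd(M)
tol = max(M.shape) * np.finfo(float).eps * s.max()
num_nonzero = int(np.sum(s > tol))
nullspace_basis = Vt[num_nonzero:].T    # columns span {v : M v = 0}

nullity = nullspace_basis.shape[1]
assert np.allclose(M @ nullspace_basis, 0)               # these really solve Mv = 0
assert nullity == M.shape[1] - np.linalg.matrix_rank(M)  # nullity = dim(V) - rank
```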
Some conceptual mistakes I saw in your post: you're confusing nullity with nullspace. The former is a natural number, while the latter is a subspace of $V$, not in general a single vector (unless the kernel is just $\{0\}$). The phrase "the rank of the kernel" makes no sense; it only makes sense to talk about the rank of a transformation. Similarly, "the nullity of the rank" also makes no sense.
Let $T: \mathbb R^5 \to \mathbb R^7$ be defined by $T(x)=Bx.$
Then $\ker(B)=\ker(T)=\operatorname{span}(v)$, hence $\dim \ker(B)=1.$
From the rank-nullity theorem we derive
$$5= \dim \ker(T)+ \operatorname{rank}(T)=\dim \ker(B)+ \operatorname{rank}(B)=1+\operatorname{rank}(B).$$
Thus
$$\operatorname{rank}(B)=4.$$
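The matrix $B$ itself is not written out above, but the computation can be illustrated with a concrete stand-in: below is a sketch (with an assumed $v=(1,1,1,1,1)$ and a hand-built $B$) of a $7\times 5$ matrix whose kernel is exactly $\operatorname{span}(v)$, for which rank-nullity forces $\operatorname{rank}(B)=5-1=4$.

```python
import numpy as np

# Assumed kernel vector for illustration (not from the original problem):
v = np.ones(5)

# D: 4x5 with rows e_i - e_{i+1}; then D v = 0 and rank(D) = 4,
# so ker(D) = span(v).
D = np.eye(4, 5) - np.eye(4, 5, k=1)

# C: 7x4 with full column rank, so B = C D is 7x5 with ker(B) = ker(D).
C = np.vstack([np.eye(4), np.ones((3, 4))])
B = C @ D

assert B.shape == (7, 5)
assert np.allclose(B @ v, 0)              # v lies in ker(B)
rank_B = np.linalg.matrix_rank(B)
assert rank_B == 4                        # rank-nullity: 5 = rank(B) + 1
```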