You are interested in finding a $K$ such that
$$
A - B\,K = V\,\Lambda\,V^{-1}, \tag{1}
$$
with $A,\Lambda,V \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $K \in \mathbb{R}^{m \times n}$ and $\Lambda$ a diagonal matrix (the right hand side is the eigendecomposition of the desired closed-loop matrix). I will assume that the pair $(A,B)$ is controllable, so $\Lambda$ can be chosen freely; furthermore, the rank of $B$ is assumed to equal $m$. If this last assumption does not hold, $B$ can be replaced by a matrix with the same span that does satisfy it, which leaves the set of possible $V$ unchanged.
When looking at the constraints and degrees of freedom it is possible to get an idea of how many columns of $V$ (the eigenvectors) can be chosen freely. Namely, the number of degrees of freedom in $K$ equals $n$ times $m$. Choosing the eigenvalues ($\Lambda$) adds $n$ constraints. Choosing a column of $V$ adds $n-1$ constraints, since the length of the eigenvector does not matter (as long as it is nonzero). From this it can be concluded that the largest number of eigenvectors $p$ that can be chosen freely should satisfy
$$
p \leq \frac{n (m - 1)}{n - 1}. \tag{2}
$$
When $m = n$ it is trivial to show that all nonsingular $V$ are allowed. The matrix $K$ can then be found using
$$
K = B^{-1} (A - V\,\Lambda\,V^{-1}). \tag{3}
$$
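As a sanity check, equation $(3)$ can be evaluated numerically. The matrices below are purely illustrative (here $B$ is simply the identity, and $V$, $\Lambda$ are an arbitrary nonsingular eigenvector matrix and diagonal eigenvalue matrix):

```python
import numpy as np

# Minimal sketch of equation (3) for the m = n case; A, V and Lam are
# illustrative choices, B is taken as the identity so it is invertible.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.eye(2)                        # m = n, B invertible
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # any nonsingular V is allowed
Lam = np.diag([-1.0, -4.0])

# Equation (3): K = B^{-1} (A - V Lam V^{-1})
K = np.linalg.solve(B, A - V @ Lam @ np.linalg.inv(V))

# The closed-loop matrix A - B K then has the chosen eigenstructure.
print(np.sort(np.linalg.eigvals(A - B @ K).real))   # -> [-4. -1.]
```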
For $m > n$, the problem can be reduced to the $m = n$ case; however, it is also possible to use the right inverse of $B$, which minimizes the 2-norm of $u$.
Now comes the more difficult and interesting case, namely $m < n$, where not all columns of $V$ can be chosen freely. Note that if $m = 1$ then all degrees of freedom of $K$ are needed to satisfy the constraints from $\Lambda$, so for a given $\Lambda$ the matrix $V$ is fixed. By multiplying equation $(1)$ from the right by $V$ and defining a temporary matrix $\Omega = K\,V$, the following linear matrix equation is obtained
$$
A\,V - B\,\Omega = V\,\Lambda. \tag{4}
$$
Equation $(4)$ can be reduced to a linear system of equations. In order to do this, equation $(4)$ has to be reshaped from an $n \times n$ matrix into an $n^2 \times 1$ vector. One way of achieving this would be
$$
\begin{bmatrix}
A\,V_{\bullet,1} - B\,\Omega_{\bullet,1} - \Lambda_{1,1} V_{\bullet,1} \\
A\,V_{\bullet,2} - B\,\Omega_{\bullet,2} - \Lambda_{2,2} V_{\bullet,2} \\
\vdots \\
A\,V_{\bullet,n} - B\,\Omega_{\bullet,n} - \Lambda_{n,n} V_{\bullet,n}
\end{bmatrix} = 0, \tag{5}
$$
where $X_{\bullet,i}$ denotes the $i$th column of matrix $X$. Next all constraints have to be defined. In order to constrain the length of the columns of $V$ which are not chosen freely, one can for example set the first element of each of these columns to one. However, it is possible that this later yields a singular system of equations; if so, you might have to set a different element of the column to one instead. For now I do not have a better solution than trial and error, but I suspect this does not happen often. The $p$ chosen columns of $V$ and the elements set to one in the remaining columns can be substituted into equation $(5)$. This can then be rewritten as a linear system of equations, since all unknown parameters occur only linearly. Note that $V_{\bullet,i}$ and $\Omega_{\bullet,i}$ occur only in the $i$th set of $n$ rows of equation $(5)$, so it can also be split into $n$ separate linear systems of equations.
This formulated system of linear equations will, however, always be singular when an entire column of $V$ is specified by the user, because the only remaining unknown parameters related to that column of $V$ are pre-multiplied by the $B$ matrix. So in order to be able to solve for the unknown parameters $\Omega_{\bullet,i}$, the vector $(A-\Lambda_{i,i} I)V_{\bullet,i}$ has to lie within the span of $B$. Or, if $\Lambda_{i,i}$ is not an eigenvalue of $A$, then $V_{\bullet,i}$ has to lie within the span of $(A-\Lambda_{i,i} I)^{-1} B$. Therefore $\Lambda_{i,i}$ and $V_{\bullet,i}$ can never both be chosen entirely freely.
As stated in the previous paragraph, the allowed choices for the columns of $V$ are limited to an $m$-dimensional span. Choosing a vector from this span only adds $m-1$ constraints per chosen column of $V$ instead of $n-1$, so equation $(2)$ becomes $p=n$. This implies that all columns of $V$ can be chosen, as long as each lies within its allowed span. According to equation $(1)$, $V$ has to be nonsingular, so all columns of $V$ have to be linearly independent of each other. This implies that the eigenvalues of $\Lambda$ can have a multiplicity of at most $m$, since it is impossible to choose more than $m$ independent columns from a span of dimension $m$. If a higher multiplicity is desired, the structure of $\Lambda$ has to be changed to the Jordan form, but the number of Jordan blocks associated with the same eigenvalue should still be at most $m$. The constraint on the multiplicity of the eigenvalues of $\Lambda$ also holds for the place() command in MATLAB, so I suspect it uses a similar method of solving this problem (at least one based on the eigenvalue decomposition instead of the more general Jordan decomposition).
Once all desired constraints are applied, each $n$-dimensional linear system of equations can be solved. If $V_{\bullet,i}$ is only constrained in its length, there will be more unknowns than equations, and again the right inverse can be used to solve for $\Omega_{\bullet,i}$ and the rest of $V_{\bullet,i}$. Once all linear systems of equations are solved, the solutions can be assembled into $V$ and $\Omega$. The controller gain is then obtained through $K = \Omega\,V^{-1}$.
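The procedure above can be sketched in code for the case where none of the desired eigenvalues is already an eigenvalue of $A$: each eigenvector is picked inside the span of $(A-\Lambda_{i,i} I)^{-1} B$, and with $V_{\bullet,i} = (A-\Lambda_{i,i} I)^{-1} B\, g_i$ equation $(4)$ holds column by column with $\Omega_{\bullet,i} = g_i$. The matrices $A$, $B$ and the coordinates $g_i$ below are illustrative:

```python
import numpy as np

# Illustrative single-input example: A in companion form with open-loop
# eigenvalues 1, 2, 3, which we want to move to -1, -2, -3.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [6.0, -11.0, 6.0]])
B = np.array([[0.0], [0.0], [1.0]])   # n = 3, m = 1
lams = [-1.0, -2.0, -3.0]             # desired closed-loop eigenvalues
n, m = B.shape

V = np.zeros((n, n))
Omega = np.zeros((m, n))
for i, lam in enumerate(lams):
    g = np.ones(m)                    # free coordinates within the span (m = 1 here)
    # Eigenvector constrained to the span of (A - lam I)^{-1} B:
    V[:, i] = np.linalg.solve(A - lam * np.eye(n), B @ g)
    Omega[:, i] = g                   # since (A - lam I) V_i = B g

K = Omega @ np.linalg.inv(V)          # K = Omega V^{-1}
print(np.round(np.sort(np.linalg.eigvals(A - B @ K).real), 6))  # -> [-3. -2. -1.]
```

For $m = 1$ the coordinates $g_i$ are just scalars, so $V$ is indeed fixed up to column scaling, consistent with the remark above.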
As you have said, the uncontrollable subspace forms a complement to the controllable subspace. In other words, $\Bbb R^n = R \oplus I$. That is, we can uniquely decompose every vector $x \in \Bbb R^n$ into the form $x = x_R + x_I$, with the "controllable component" $x_R \in R$ and "uncontrollable component" $x_I \in I$. A state $x \in \Bbb R^n$ is controllable if and only if its uncontrollable component is zero.
It is useful to have such a decomposition because the nature of the "state-update" matrix $A$ is completely determined by its behavior over these separate subspaces, since for any $x = x_R + x_I$, we have
$$
Ax = A(x_R + x_I) = Ax_R + Ax_I.
$$
Here is a continuous-time example. Suppose that we have
$$
A = \pmatrix{a_1 & 0\\0 & a_2},\quad B = \pmatrix{1\\0}, \quad C = \pmatrix{1&1}, \quad D = 0.
$$
It is easy to verify that our controllable subspace of $\Bbb R^2$ is the $x_1$-axis, i.e. the span of $(1,0)$. Any other one-dimensional subspace can be selected as the uncontrollable subspace, but it is convenient to take $I$ to be the span of $(0,1)$, since this space happens to be invariant under $A$ (note: such a complement is not always available).
Suppose that the initial state is given by $x(0) = (x_1,x_2)$. It is easy to see that for input $u(t)$, the state and output will be
$$
x(t) = \left(e^{a_1t}\left[x_1 + \int_0^t e^{-a_1\tau}u(\tau)\,d\tau\right], \quad x_2 e^{a_2t}\right),
\\
y(t) = e^{a_1t}\left[x_1 + \int_0^t e^{-a_1\tau}u(\tau)\,d\tau\right] + x_2 e^{a_2t}.
$$
The first component of the sum, which corresponds to the controllable component of $x(t)$, can be stabilized with a suitable input. The second component, which corresponds to the uncontrollable component of $x(t)$, cannot be stabilized in this way. We could also say that the component $x_2 e^{a_2 t}$ is itself an autonomous trajectory of the system: it evolves independently of the input.
We see from the above that the output is only stabilizable (i.e. can be "steered" so that $y(t) \to 0$) if $e^{a_2t} \to 0$, i.e. if $a_2 < 0$.
Correspondingly, we see that $a_2$ is an eigenvalue of $A$ whose eigenvector $(0,1)$ is an element of the uncontrollable subspace $I$.
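The claim that the uncontrollable component ignores the input can be checked numerically. The values $a_1 = -1$, $a_2 = -0.5$ and the input $u(t) = \sin t$ below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simulate x' = A x + B u for the diagonal example; the second state should
# follow x2(0) * exp(a2 t) regardless of the input.
a1, a2 = -1.0, -0.5
A = np.array([[a1, 0.0],
              [0.0, a2]])
B = np.array([1.0, 0.0])

def f(t, x):
    return A @ x + B * np.sin(t)     # arbitrary input u(t) = sin t

x0 = np.array([2.0, 3.0])
sol = solve_ivp(f, (0.0, 5.0), x0, rtol=1e-10, atol=1e-12)
print(sol.y[1, -1], x0[1] * np.exp(a2 * 5.0))   # the two values agree
```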
Suppose that we keep $v_1 = (1,0)$ as the basis for $R$, but instead take $v_2 = (1,1)$ as a basis for $I$. We find that
$$
Av_1 = a_1 v_1 + 0v_2, \\
A v_2 = \pmatrix{a_1\\a_2} = (a_1 - a_2)v_1 + a_2 v_2.
$$
So, the matrix of $A$ relative to the basis $\{v_1,v_2\}$ is
$$
\bar A = \pmatrix{a_1 & a_1 - a_2\\0 & a_2}.
$$
We indeed find that the eigenvalue $a_2$ of $A$ is associated with our uncontrollable subspace $I$. It is tricky, however, to figure out exactly what "associated with $I$" really means here.
One way to make sense of it is this. If we define the projection map $P_I(x_R + x_I) = x_I$, then we could say that the eigenvalue $\lambda$ of $A$ is "associated with $I$" if it is an eigenvalue of the map $T:I \to I$ defined by $T(x) = P_I(Ax)$.
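The change of basis above is easy to verify numerically; the values $a_1 = 1$, $a_2 = 2$ are illustrative:

```python
import numpy as np

# Relative to the basis v1 = (1,0), v2 = (1,1), the matrix of A = diag(a1, a2)
# should become [[a1, a1 - a2], [0, a2]].
a1, a2 = 1.0, 2.0
A = np.diag([a1, a2])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # columns are v1 and v2
A_bar = np.linalg.inv(P) @ A @ P
print(A_bar)                         # -> [[ 1. -1.] [ 0.  2.]]
```

The upper-right entry $a_1 - a_2 = -1$ shows that, in this basis, $A$ is triangular with the eigenvalue $a_2$ in the position associated with $I$.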
Best Answer
Since you only mentioned state feedback, I assume that you are dealing with a system of the following form
$$ \dot{x} = A\,x + B\,u $$
with $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$ and you want to find a state feedback $u = -K\,x$ such that the dynamics of the controllable parts of the system can be chosen. The entire state $x$ is considered to be known and thus observability does not have to be taken into consideration.
By looking at the controllable and uncontrollable states it is possible to find a similarity transformation which decomposes these states, similar to the Kalman decomposition but without also considering the observability. In order to do this you need the controllability matrix $\mathcal{C}$ defined as
$$ \mathcal{C} = \begin{bmatrix} B & A\,B & \cdots & A^{n-1}B \end{bmatrix}. $$
A controllable decomposition can be found using the similarity transformation $T = \begin{bmatrix}T_1 & T_2\end{bmatrix}$, with the columns of $T_1$ having the same span as the controllability matrix and $T_2$ chosen such that $T$ is invertible. In the new coordinates, denoted by $x = T\,\hat{x}$, the dynamics are as follows
$$ \dot{\hat{x}} = \underbrace{T^{-1} A\,T}_{\hat{A}}\,\hat{x} + \underbrace{T^{-1} B}_{\hat{B}}\,u $$
such that
$$ \hat{A} = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}, \quad \hat{B} = \begin{bmatrix} B_1 \\ 0 \end{bmatrix} $$
where the dimension of $A_{11}$ equals the rank of $\mathcal{C}$ and the pair $(A_{11}, B_1)$ is controllable (its associated controllability matrix has full rank). So for this pair you can do pole placement, such that the poles of $A_{11}-B_1\,K_1$ can be chosen arbitrarily. Therefore, $u = -\begin{bmatrix}K_1 & 0\end{bmatrix} \hat{x}$ will place the controllable poles of $\hat{A}$ arbitrarily as well.
By using the definition of $\hat{x}$ it is possible to express this state feedback in terms of $x$ instead, namely $\hat{x} = T^{-1} x$. Or in other words your feedback gain can be found with $K = \begin{bmatrix}K_1 & 0\end{bmatrix}\,T^{-1}$.
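The whole procedure can be sketched as follows. Here $T_1$ is taken as an orthonormal basis for the range of the controllability matrix (from an SVD), $T_2$ completes it to an invertible $T$, and `scipy.signal.place_poles` handles the pole placement for the controllable pair $(A_{11}, B_1)$. The function name `partial_place` and the rank tolerance are my own choices:

```python
import numpy as np
from scipy.signal import place_poles

def partial_place(A, B, poles):
    """Place the controllable poles of (A, B); len(poles) must equal rank(C)."""
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    U, s, _ = np.linalg.svd(ctrb)
    r = int(np.sum(s > 1e-9 * s[0]))     # rank of the controllability matrix
    T = U                                # T1 = U[:, :r], T2 = U[:, r:]
    A_hat = T.T @ A @ T                  # T is orthogonal, so T^{-1} = T^T
    B_hat = T.T @ B
    K1 = place_poles(A_hat[:r, :r], B_hat[:r, :], poles).gain_matrix
    return np.hstack([K1, np.zeros((B.shape[1], n - r))]) @ T.T
```

Applied to the example below, `partial_place(A, B, [-1, -2])` yields the same closed-loop poles as the worked example; the gain itself differs because this $T$ is chosen differently.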
For example, consider the system
$$ A = \begin{bmatrix} 4 & 0 & 0 \\ -4 & 0 & -2 \\ -2 & -2 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}, $$
which has the eigenvalues $2$, $-2$ and $4$. This pair has the following controllability matrix
$$ \mathcal{C} = \begin{bmatrix} 1 & 4 & 16 \\ 0 & -2 & -12 \\ -1 & -2 & -4 \end{bmatrix}, $$
which has rank 2. It can be shown that its first two columns are linearly independent of each other, so an option for the similarity transformation would be
$$ T = \begin{bmatrix} 1 & 2 & 0 \\ 0 & -1 & 0 \\ -1 & -1 & 1 \end{bmatrix}. $$
It can be shown that the added last column makes $T$ invertible (full rank). Performing the similarity transformation gives
$$ \hat{A} = \begin{bmatrix} 0 & -4 & -4 \\ 2 & 6 & 2 \\ 0 & 0 & -2 \end{bmatrix}, \quad \hat{B} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, $$
such that
$$ A_{11} = \begin{bmatrix} 0 & -4 \\ 2 & 6 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}. $$
Finding a feedback gain for this pair, which places the poles at $-1$ and $-2$ gives $K_1 = \begin{bmatrix}9 & 24\end{bmatrix}$. So the actual state feedback gain can be calculated with
$$ K = \begin{bmatrix}K_1 & 0\end{bmatrix}\,T^{-1} = \begin{bmatrix}9 & -6 & 0\end{bmatrix}, $$
which also places the two controllable poles of $A$, which are initially located at $2$ and $4$, at $-1$ and $-2$ respectively.
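This worked example is easy to verify numerically: with $K = \begin{bmatrix}9 & -6 & 0\end{bmatrix}$, the closed-loop matrix $A - B\,K$ should have the placed poles $-1$ and $-2$, plus the uncontrollable pole $-2$, which stays where it is:

```python
import numpy as np

# Check the closed-loop eigenvalues of the worked example.
A = np.array([[4.0, 0.0, 0.0],
              [-4.0, 0.0, -2.0],
              [-2.0, -2.0, 0.0]])
B = np.array([[1.0], [0.0], [-1.0]])
K = np.array([[9.0, -6.0, 0.0]])

print(np.round(np.sort(np.linalg.eigvals(A - B @ K).real), 4))  # -> [-2. -2. -1.]
```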