Your first two vectors are the same, and you are missing one of the faces perpendicular to the $z$-axis:
$$
\vec x = [0,0,0] + t_1 [1,0,0] + t_2 [0,1,0], \quad 0 \le t_1, t_2 \le 1
$$
Otherwise, it looks good.
To make sure you have them all without repeats, it helps to list each one descriptively; e.g.
$$
\begin{array}{ccc}
\text{perpendicular axis}&\text{which end}&\text{parameterization}\\
\hline
x&0&[0,0,0] + t_1 [0,1,0] + t_2 [0,0,1]\\
x&1&[1,0,0] + t_1 [0,1,0] + t_2 [0,0,1]\\
y&0&[0,0,0] + t_1 [0,0,1] + t_2 [1,0,0]\\
y&1&[0,1,0] + t_1 [0,0,1] + t_2 [1,0,0]\\
z&0&[0,0,0] + t_1 [1,0,0] + t_2 [0,1,0]\\
z&1&[0,0,1] + t_1 [1,0,0] + t_2 [0,1,0]\\
\end{array}
$$
where $0 \le t_1, t_2 \le 1$.
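As a quick sanity check (not part of the answer, just a sketch), you can evaluate each parameterization at the corner values $t_1, t_2 \in \{0, 1\}$ and confirm that the six faces together hit all eight vertices of the unit cube:

```python
# Evaluate each face parameterization at the corners (t1, t2 in {0, 1})
# and confirm that together they produce all 8 vertices of the unit cube.
from itertools import product

faces = [
    # (base point, direction 1, direction 2), one entry per row of the table
    ((0, 0, 0), (0, 1, 0), (0, 0, 1)),  # perpendicular to x, at x = 0
    ((1, 0, 0), (0, 1, 0), (0, 0, 1)),  # perpendicular to x, at x = 1
    ((0, 0, 0), (0, 0, 1), (1, 0, 0)),  # perpendicular to y, at y = 0
    ((0, 1, 0), (0, 0, 1), (1, 0, 0)),  # perpendicular to y, at y = 1
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),  # perpendicular to z, at z = 0
    ((0, 0, 1), (1, 0, 0), (0, 1, 0)),  # perpendicular to z, at z = 1
]

def point(base, d1, d2, t1, t2):
    """The point base + t1*d1 + t2*d2, componentwise."""
    return tuple(b + t1 * u + t2 * v for b, u, v in zip(base, d1, d2))

corners = {point(*f, t1, t2) for f in faces for t1, t2 in product((0, 1), repeat=2)}
print(sorted(corners))  # all 8 vertices of the unit cube
```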
That is a really convoluted way of describing a steady state in my opinion.
The set of potential states a system can be in is called $S$. For example, $S$ could be the set of possible answers to "how many animals are there in your socks".
They seem to be describing a system that proceeds in discrete steps (a discrete-time Markov chain). So a step could be going from one day to the next.
$\pi_k(x)$ is the probability of being in state $x$ on step $k$. So there might be a 10% probability that you have 4 kittens in your socks on Tuesday: $\pi_{\text{Tuesday}}(4) = .1$.
$P(x,y)$ is the probability that you proceed from state $x$ to state $y$. So if you have 3 kittens in your socks on one day, there is a 14% chance that you'll have 5 the next day. $P(3,5) = .14$.
At every step, each state has an associated probability. If your socks can hold at most $5$ kittens, then $\pi(0) + \pi(1) + \pi(2) + \pi(3) + \pi(4) + \pi(5) = 1$. That's just basic probability: the sum over all possibilities is 100%.
Knowing the transition probabilities $P$ and the state probabilities at some step, $\pi_k(\cdot)$, you can calculate the state probabilities at the next step, $\pi_{k+1}(\cdot)$. Specifically, $\pi_{k+1}(y) = \sum_{x \in S} \pi_{k}(x)P(x,y)$. "The probability of having 2 kittens in your socks is the probability of having 0 kittens times the probability of transitioning from 0 to 2, plus the probability of having 1 times the probability of transitioning from 1 to 2, plus ...".
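That update formula is easy to sketch in code. Here's a minimal Python version, using a made-up two-state chain (the states "a" and "b" and their transition probabilities are invented for illustration):

```python
# Sketch of the update pi_{k+1}(y) = sum over x of pi_k(x) * P(x, y),
# for a hypothetical two-state chain.
P = {
    "a": {"a": 0.9, "b": 0.1},  # from state "a": stay with 90%, move with 10%
    "b": {"a": 0.5, "b": 0.5},  # from state "b": 50/50
}

def step(pi, P):
    """Advance the distribution one step: pi_{k+1}(y) = sum_x pi_k(x) * P(x, y)."""
    return {y: sum(pi[x] * P[x][y] for x in P) for y in P}

pi = {"a": 1.0, "b": 0.0}  # start in state "a" with certainty
pi = step(pi, P)
print(pi)  # {'a': 0.9, 'b': 0.1}
```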
When $\pi_{k+1}(x) = \pi_k(x)$ for all $x$, that $\pi$ is called a steady state. In that state, the transitions don't change the probability of each state from step to step.
Example:
You can have at most 2 kittens in your socks.
- If you have zero kittens in your socks, then the probability that you have 0 the next day is 10%, 1 is 10%, 2 is 80%
- If you have one kitten in your socks, then the probability that you have 0 the next day is 10%, 1 is 20%, 2 is 70%
- If you have two kittens in your socks, then the probability that you have 0 the next day is 30%, 1 is 30%, 2 is 40%
Then a steady state is:
$$\pi(0) = \frac{27}{128},\quad \pi(1) = \frac{15}{64},\quad \pi(2) = \frac{71}{128}$$
You can check this using the update formula above.
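Here's one way to do that check, using exact fraction arithmetic so no rounding gets in the way (a sketch, not part of the original answer):

```python
# Verify the claimed steady state pi for the kitten example:
# pi(y) should equal sum over x of pi(x) * P(x, y) for every state y.
from fractions import Fraction as F

P = [
    [F(1, 10), F(1, 10), F(8, 10)],  # from 0 kittens: 10%, 10%, 80%
    [F(1, 10), F(2, 10), F(7, 10)],  # from 1 kitten:  10%, 20%, 70%
    [F(3, 10), F(3, 10), F(4, 10)],  # from 2 kittens: 30%, 30%, 40%
]
pi = [F(27, 128), F(15, 64), F(71, 128)]

next_pi = [sum(pi[x] * P[x][y] for x in range(3)) for y in range(3)]
print(next_pi == pi)  # True: one step of the chain leaves pi unchanged
```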
"$(1,0,0),(0,1,0),(0,0,1)$ are all eigenvectors corresponding to eigenvalue 1"
What you have found is not all the eigenvectors, but a basis for the eigenspace.
If you want all eigenvectors, you want the space spanned by these:
$$\{t(1,0,0)+u(0,1,0)+v(0,0,1):t,u,v\in\mathbf{R}\}$$
that is,
$$\{(t,u,v):t,u,v\in\mathbf{R}\}$$
that is,
$$\mathbf{R}^3$$
Notice that for any $\mathbf{x}\in\mathbf{R}^3$ you have $\mathbf{x}I=1\mathbf{x}$, so all of these are, in fact, (left) eigenvectors with eigenvalue 1.
Since you only want probability distributions, you actually have
$$\{(t,u,v):t,u,v\ge 0,\ t+u+v=1\}$$
That's infinitely many stationary distributions. Indeed, if I have probability $t$ of being in state 1 and probability $u$ of being in state 2, then after one step I still have probability $t$ of being in state 1 and probability $u$ of being in state 2, since I never change state. So all of these are stationary distributions: applying a step of the Markov process doesn't change the probability distribution of the state I might be in.
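A quick numerical sketch of that claim: for the identity transition matrix, any probability vector $\pi$ satisfies $\pi I = \pi$, so every distribution is stationary.

```python
# For the identity transition matrix, pi @ I = pi for every probability
# vector pi, so every distribution on the three states is stationary.
import numpy as np

I = np.eye(3)
for pi in ([1, 0, 0], [0.2, 0.3, 0.5], [1/3, 1/3, 1/3]):
    pi = np.array(pi)
    print(np.allclose(pi @ I, pi))  # True each time
```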