The affine function is
$T(x) = B - x_1 A_1 - \cdots - x_n A_n $.
The solution set to your LMI can be described as
\begin{equation}
\{ x \mid T(x) \succeq 0 \} = T^{-1}(S^m_+),
\end{equation}
where $S^m_+$ is the cone of symmetric positive semidefinite $m \times m$ matrices.
Further details:
If we view $A_1,\ldots,A_n$ and $B$ as column vectors in $\mathbb R^{m^2}$, then
\begin{equation}
T(x) = \underbrace{B}_{m^2 \times 1} - \underbrace{A}_{m^2 \times n}\,\underbrace{x}_{n \times 1}
\end{equation}
where
\begin{equation}
A = \begin{bmatrix} A_1 & A_2 & \cdots & A_n \end{bmatrix}.
\end{equation}
In this equation, $A$ is multiplied by $x$ using ordinary matrix-vector multiplication.
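As a quick sanity check, here is a minimal numerical sketch (with made-up $2 \times 2$ data, so $m = n = 2$; none of these matrices come from the question) confirming that the vectorized form agrees with the matrix form, and that membership in the solution set reduces to an eigenvalue test:

```python
import numpy as np

m, n = 2, 2
A1 = np.array([[2.0, 0.0], [0.0, 1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
Bmat = np.array([[3.0, 0.0], [0.0, 3.0]])

# Stack the vectorized coefficient matrices as columns: A is m^2 x n.
A = np.column_stack([A1.reshape(-1), A2.reshape(-1)])
x = np.array([0.5, 0.25])

Tx_vec = Bmat.reshape(-1) - A @ x      # vectorized form, length m^2
Tx_mat = Bmat - x[0] * A1 - x[1] * A2  # matrix form
assert np.allclose(Tx_vec.reshape(m, m), Tx_mat)

# x lies in the solution set iff T(x) is PSD, i.e. all eigenvalues >= 0.
print(np.all(np.linalg.eigvalsh(Tx_mat) >= 0))  # True for this x
```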
I will use the usual inequalities; it should be clear what they mean. So let us look at your first question. You want to show that $x \in C$ iff $\exists y\geq 0:\mathbf{1}^Ty\leq1 \text{ and }x=v_0+By$. Take an $x \in C$. By definition there exist non-negative $\theta_i$'s that sum to unity, such that $x=\sum_{i=0}^k\theta_iv_i$. Notice that since $\sum_{i=0}^k \theta_i=1,$ we have $\theta_0=1-\sum_{i=1}^k \theta_i$. With this in mind, let us expand our expression for $x$:
$$x=\sum_{i=0}^k\theta_iv_i=\theta_0v_0+\theta_1v_1+\dots+\theta_kv_k=(1-\sum_{i=1}^k \theta_i)v_0+\theta_1v_1+\dots+\theta_kv_k$$
$$= v_0-\theta_1v_0-\theta_2v_0-\dots-\theta_kv_0+\theta_1v_1+\dots+\theta_kv_k$$
$$=v_0+\theta_1(v_1-v_0)+\theta_2(v_2-v_0)+\dots+\theta_k(v_k-v_0) $$
$$=v_0+B \begin{bmatrix} \theta_1 & \cdots & \theta_k \end{bmatrix}^T$$
Notice that $\begin{bmatrix} \theta_1 & \cdots & \theta_k \end{bmatrix}^T$ has non-negative entries, since each $\theta_j$ is non-negative. And since all the $\theta_j$ sum to unity and $\theta_0 \geq 0$, dropping $\theta_0$ gives
$$
\mathbf{1}^T\begin{bmatrix} \theta_1 & \cdots & \theta_k \end{bmatrix}^T = 1 - \theta_0 \leq 1.
$$
The "if" direction can be proved similarly, running the steps backwards: given $y \geq 0$ with $\mathbf{1}^T y \leq 1$, set $\theta_i = y_i$ for $i = 1, \dots, k$ and $\theta_0 = 1 - \mathbf{1}^T y \geq 0$, which gives a convex combination representing $x$.
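Here is a small numeric check of this equivalence (a hypothetical triangle in $\mathbb{R}^2$; the vertices are made up for illustration):

```python
import numpy as np

v0 = np.array([0.0, 0.0])
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
B = np.column_stack([v1 - v0, v2 - v0])

theta = np.array([0.2, 0.5, 0.3])  # theta_0, theta_1, theta_2; sums to 1
x = theta[0] * v0 + theta[1] * v1 + theta[2] * v2

y = theta[1:]                      # drop theta_0
assert np.all(y >= 0) and y.sum() <= 1
assert np.allclose(x, v0 + B @ y)

# Converse direction: recover theta_0 = 1 - 1^T y >= 0.
assert 1 - y.sum() >= 0
```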
For your second question, you ask whether the columns of $B$ aren't clearly linearly dependent. Counter-examples are easy to construct and worth finding yourself, but the independence of the vectors $\{v_1-v_0,v_2-v_0,\dots,v_k-v_0\}$ comes from the assumed affine independence of $\{v_0,v_1,\dots,v_k\}$; you write this yourself before defining the convex hull of a set of vectors.
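For one concrete instance (three collinear, hence affinely dependent, points in $\mathbb{R}^2$, chosen purely for illustration):
$$
v_0=\begin{bmatrix}0\\0\end{bmatrix},\quad v_1=\begin{bmatrix}1\\1\end{bmatrix},\quad v_2=\begin{bmatrix}2\\2\end{bmatrix} \quad\Longrightarrow\quad v_2-v_0=2(v_1-v_0),
$$
so here the columns of $B$ are linearly dependent precisely because the points are affinely dependent.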
For your last question, we have that $B \in \mathbb{R}^{n\times k}$ has rank $k$ by assumption. Notice $k \leq n$, since we can't have $k > n$ and still have the columns be linearly independent. From your first course in linear algebra, you know that it is possible to reduce the matrix $B$ to reduced row echelon form, where the last $n-k$ rows are zero rows, since all your columns are linearly independent. Each time you perform a row operation on a matrix, you actually left-multiply it by an $n \times n$ elementary matrix, and each elementary matrix is invertible. The product of all the elementary matrices corresponding to your row operations is your matrix $A$. The following notes give a good example of using elementary matrices to construct such a matrix: https://people.math.carleton.ca/~kcheung/math/notes/MATH1107/wk05/05_elementary_matrices_example.html
Since $B$ has full rank, $A$ can be obtained by Gaussian elimination, much as one computes the inverse of a full-rank square matrix.
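A sketch of this route in code (the matrix $B$ below is hypothetical, and the helper `reduction_matrix` is mine, not from the question): each row operation is applied to an identity matrix alongside $B$, so the identity accumulates the product of the corresponding elementary matrices:

```python
import numpy as np

def reduction_matrix(B):
    """Gauss-Jordan on B, mirroring every row operation on an identity
    matrix; returns A with A @ B = [[I_k], [0]]."""
    n, k = B.shape
    R = B.astype(float).copy()
    A = np.eye(n)
    row = 0
    for col in range(k):
        # Choose a pivot at or below `row` in this column.
        pivot = row + np.argmax(np.abs(R[row:, col]))
        if np.isclose(R[pivot, col], 0.0):
            continue  # cannot happen when B has full column rank
        R[[row, pivot]] = R[[pivot, row]]  # swap rows   (elementary matrix)
        A[[row, pivot]] = A[[pivot, row]]
        scale = R[row, col]
        R[row] /= scale                    # scale a row (elementary matrix)
        A[row] /= scale
        for r in range(n):
            if r != row:
                factor = R[r, col]         # add a multiple of the pivot row
                R[r] -= factor * R[row]
                A[r] -= factor * A[row]
        row += 1
    return A

B = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 4.0]])  # rank 2, n = 3, k = 2
A = reduction_matrix(B)
target = np.vstack([np.eye(2), np.zeros((1, 2))])
print(np.allclose(A @ B, target))  # True
```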
You could also expand $B$ to a full-rank matrix $C \in \mathbb{R}^{n \times n}$ by appending complementary basis vectors to the columns of $B$. Define $A = C^{-1}$; then $A$ satisfies the required property.
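A sketch of this basis-extension route, with the same hypothetical $B$ (the completion simply tries standard basis vectors until the columns span $\mathbb{R}^n$): since $B$ consists of the first $k$ columns of $C$, the product $AB = C^{-1}C$ restricted to those columns is exactly $\begin{bmatrix} I_k \\ 0 \end{bmatrix}$:

```python
import numpy as np

B = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 4.0]])
n, k = B.shape

# Greedily append standard basis vectors that enlarge the column space.
C = B
for e in np.eye(n).T:
    if C.shape[1] == n:
        break
    trial = np.column_stack([C, e])
    if np.linalg.matrix_rank(trial) == C.shape[1] + 1:
        C = trial

A = np.linalg.inv(C)
target = np.vstack([np.eye(k), np.zeros((n - k, k))])
print(np.allclose(A @ B, target))  # True
```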