If by "formula" you mean some expression that is a continuous function of its arguments, then the answer is that this is impossible, for reasons similar to those I explained in this answer.
Suppose your $n-1$ vectors span a space of dimension $d<n-1$; then the space $S$ of possible vectors orthogonal to all of them has dimension $n-d>1$. Now if you take any subspace $L$ of dimension $1$ in $S$, you can easily make that line the only set of possibilities by making a very small adjustment to your vectors (add to some of your vectors small multiples of vectors that lie in $S$ but are orthogonal to $L$). By continuity, the vector of $S$ that your formula chooses must then lie arbitrarily close to every such line $L$, and the zero vector is the only vector that satisfies this requirement.
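For a concrete illustration (the cross product is my example here, not part of the original argument): for $n=3$ the cross product is exactly such a continuous formula, and it behaves as this argument predicts, returning the zero vector whenever its two inputs are linearly dependent:
$$\mathbf u\times\mathbf v = (u_2v_3-u_3v_2,\; u_3v_1-u_1v_3,\; u_1v_2-u_2v_1),\qquad \mathbf u\times(\lambda\mathbf u)=\mathbf 0.$$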
The key point to understand here is that you are really dealing with two copies of $\mathbb R^2$, although that is not so obvious when using the standard basis.
The first $\mathbb R^2$ is your vector space. Let's write this vector space and everything in it in blue. This first $\color{blue}{\mathbb R^2}$ is equipped with a vector space structure and additionally with the dot product $\color{blue}{\mathbf x\cdot\mathbf y = x_1y_1+x_2y_2}$.
Now as soon as you choose a basis $\{\color{blue}{\mathbf b_1},\color{blue}{\mathbf b_2}\}\subset\color{blue}{\mathbb R^2}$, you can write every vector $\color{blue}{\mathbf x}\in\color{blue}{\mathbb R^2}$ in a unique way as $\color{blue}{\mathbf x}=\color{red}{\xi_1}\color{blue}{\mathbf b_1}+\color{red}{\xi_2}\color{blue}{\mathbf b_2}$. Note that $\color{red}{\xi_1}$ and $\color{red}{\xi_2}$ are not the components of the vector in $\color{blue}{\mathbb R^2}$, but are basis dependent.
But of course you always need two of them, and when doing vector addition and scalar multiplication, you will find that they behave exactly as the components of a vector should. Therefore it does make sense to consider them as the components of a vector in an $\color{red}{\mathbb R^2}$ which, however, is a different $\mathbb R^2$ from the original $\color{blue}{\mathbb R^2}$ we started with. In particular, the coordinate $\color{red}{\mathbb R^2}$ is not pre-equipped with an inner product.
The basis then defines a linear map $\beta$ from the coordinate $\color{red}{\mathbb R^2}$ to the original $\color{blue}{\mathbb R^2}$ given by
$$\beta(\color{red}{\boldsymbol\xi})=\color{red}{\xi_1}\color{blue}{\mathbf b_1} + \color{red}{\xi_2}\color{blue}{\mathbf b_2}.$$
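To make this concrete with an illustrative (non-orthogonal) basis of my own choosing, say $\color{blue}{\mathbf b_1}=(1,0)$ and $\color{blue}{\mathbf b_2}=(1,1)$:
$$\beta(\color{red}{\boldsymbol\xi})=\color{red}{\xi_1}(1,0)+\color{red}{\xi_2}(1,1)=(\color{red}{\xi_1}+\color{red}{\xi_2},\,\color{red}{\xi_2}).$$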
Now remember that I said that the coordinate $\color{red}{\mathbb R^2}$ is not pre-equipped with an inner product. That doesn't mean we cannot give it one, but we want to do it in such a way that the product is preserved by the map $\beta$; that is, we want to have
$$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \beta(\color{red}{\boldsymbol\xi})\color{blue}{\cdot}\beta(\color{red}{\boldsymbol\eta})$$
where $\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle}$ denotes the inner product in the coordinate $\color{red}{\mathbb R^2}$. By inserting the explicit formula for $\beta$, one easily sees that
$$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \sum_{j,k=1}^2 (\color{blue}{\mathbf b_j\cdot\mathbf b_k})\color{red}{\xi_j\eta_k}.$$
Now quite obviously, if $\{\color{blue}{\mathbf b_1},\color{blue}{\mathbf b_2}\}$ is not an orthogonal basis, then $\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle}\ne\color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}$, and indeed, the inner product on the coordinate $\color{red}{\mathbb R^2}$ explicitly depends on the chosen basis $\{\color{blue}{\mathbf b_1},\color{blue}{\mathbf b_2}\}$. But that is not really surprising, because the vector in $\color{blue}{\mathbb R^2}$ those coordinates describe does depend on the basis chosen, and of course different vectors in general have different inner products.
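For instance, with the illustrative basis $\color{blue}{\mathbf b_1}=(1,0)$, $\color{blue}{\mathbf b_2}=(1,1)$ from above, we get $\color{blue}{\mathbf b_1\cdot\mathbf b_1}=1$, $\color{blue}{\mathbf b_1\cdot\mathbf b_2}=1$, $\color{blue}{\mathbf b_2\cdot\mathbf b_2}=2$, and hence
$$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle}=\color{red}{\xi_1\eta_1}+\color{red}{\xi_1\eta_2}+\color{red}{\xi_2\eta_1}+2\color{red}{\xi_2\eta_2},$$
which is clearly not $\color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}$.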
Note that by definition of the inner product, with $\beta(\color{red}{\boldsymbol\xi})=\color{blue}{\mathbf x}$ and $\beta(\color{red}{\boldsymbol\eta})=\color{blue}{\mathbf y}$ it is of course still true that
$$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \color{blue}{\textbf x\cdot\textbf y} = \color{blue}{x_1y_1} + \color{blue}{x_2y_2}.$$
But in general, $\color{blue}{x_1y_1} + \color{blue}{x_2y_2} \ne \color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}$.
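A quick numerical check, using the illustrative basis $\color{blue}{\mathbf b_1}=(1,0)$, $\color{blue}{\mathbf b_2}=(1,1)$ from above: take $\color{red}{\boldsymbol\xi}=(1,1)$ and $\color{red}{\boldsymbol\eta}=(1,0)$, so that $\color{blue}{\mathbf x}=\beta(\color{red}{\boldsymbol\xi})=(2,1)$ and $\color{blue}{\mathbf y}=\beta(\color{red}{\boldsymbol\eta})=(1,0)$. Then
$$\color{blue}{x_1y_1}+\color{blue}{x_2y_2}=2\cdot1+1\cdot0=2=\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle},\qquad\text{whereas}\qquad \color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}=1.$$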
However, if you choose the standard basis $\color{blue}{\mathbf b_k}=\color{blue}{\mathbf e_k}$, then you obviously have $\color{red}{\xi_k}=\color{blue}{x_k}$ and $\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \color{blue}{x_1y_1+x_2y_2} = \color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}.$ This is why it is so easy to overlook that you are really working with two different $\mathbb R^2$'s when using the standard basis.
Best Answer
Assertion 1 is true, since each vector's orthogonal projection onto the space spanned by the others is $0$. Hence assuming that some $v_k$ is linearly dependent on the other vectors in $S$ leads to the contradictory conclusion that $v_k=0$.
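Explicitly: if $v_k=\sum_{j\neq k}c_jv_j$ for some scalars $c_j$, then by pairwise orthogonality
$$\|v_k\|^2=v_k\cdot v_k=\sum_{j\neq k}c_j\,(v_j\cdot v_k)=0,$$
which forces $v_k=0$.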
Assertion 2 is false. Consider, for $n\geq 3$, an $S$ where $v_k$ has every entry equal to $1$ except for the $k$th component, which is $a$. Then the dot product of any two of these vectors is $2a+n-2$. Setting this to $0$ and solving gives $a=1-\frac{n}{2}$. The resulting vectors form an orthogonal basis and none of them has any component equal to $0$. For $n=2$, we can take the vectors $\langle a,b\rangle$ and $\langle b,-a\rangle$ with $a,b\neq 0$. For $n=1$, every nonzero choice of $v_1$ is a counterexample.
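To spell out one instance, take $n=3$, so $a=1-\frac{3}{2}=-\frac{1}{2}$ and
$$v_1=\left(-\tfrac12,1,1\right),\quad v_2=\left(1,-\tfrac12,1\right),\quad v_3=\left(1,1,-\tfrac12\right).$$
Indeed $v_1\cdot v_2=-\tfrac12-\tfrac12+1=0$ (and likewise for the other pairs), while $\|v_k\|^2=\tfrac14+1+1=\tfrac94$, so each vector has length $\tfrac32$.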
Assertion 3 is false, since in the example just given to disprove assertion 2 the vectors do not have unit length.
Assertion 4 is true, since by assertion 1 the $n$ vectors are linearly independent, and $n$ linearly independent vectors in the $n$-dimensional space $\mathbb{R}^n$ form a basis.