If $\{v_1, v_2, v_3\}$ is a basis for $\mathbb{R}^3$, we can write any $v \in \mathbb{R}^3$ as a linear combination of $v_1, v_2,$ and $v_3$ in a unique way; that is, $v = x_1v_1 + x_2v_2+x_3v_3$ where $x_1, x_2, x_3 \in \mathbb{R}$. While we know that $x_1, x_2, x_3$ are unique, we have no way of finding them without doing some explicit calculations.
If $\{w_1, w_2, w_3\}$ is an orthonormal basis for $\mathbb{R}^3$, we can write any $v \in \mathbb{R}^3$ as $$v = (v\cdot w_1)w_1 + (v\cdot w_2)w_2 + (v\cdot w_3)w_3.$$ In this case, we have an explicit formula for the unique coefficients in the linear combination.
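As a quick numerical sanity check (a sketch in Python; the particular basis vectors below are my own example, chosen as a rotation of the standard basis), the coefficients really are plain dot products:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# An example orthonormal basis for R^3 (a rotation of the standard basis).
s = 1 / math.sqrt(2)
w1 = (s, s, 0.0)
w2 = (s, -s, 0.0)
w3 = (0.0, 0.0, 1.0)

v = (2.0, 3.0, 4.0)

# The coefficient of each w_i is just v . w_i -- no linear system to solve.
coeffs = [dot(v, w) for w in (w1, w2, w3)]

# Reconstruct v from the formula v = (v.w1)w1 + (v.w2)w2 + (v.w3)w3.
recon = tuple(sum(c * w[i] for c, w in zip(coeffs, (w1, w2, w3)))
              for i in range(3))
```

The reconstruction agrees with $v$ up to floating-point error, which is exactly the claim: for an orthonormal basis, the dot products *are* the unique coefficients.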
Furthermore, the above formula is very useful when dealing with projections onto subspaces.
Added Later: Note that if you have an orthogonal basis, you can divide each vector by its length and the basis becomes orthonormal. If you have an arbitrary basis and want to turn it into an orthonormal basis, you need to use the Gram-Schmidt process (which follows from the above formula).
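Here is a minimal sketch of Gram-Schmidt in Python (assuming the input vectors are linearly independent; the helper names are mine). It uses the coefficient formula above at each step:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (a sketch)."""
    ortho = []
    for v in vectors:
        w = list(v)
        # Subtract the projection of v onto each earlier (unit-length)
        # vector u; by the formula above, the coefficient is just v . u.
        for u in ortho:
            c = dot(v, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        # Normalize what's left so the new vector has length 1.
        norm = math.sqrt(dot(w, w))
        ortho.append([wi / norm for wi in w])
    return ortho

basis = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, 1.0, 1.0)]
onb = gram_schmidt(basis)
```

Running this on the example basis recovers the standard basis, and in general the output vectors are pairwise orthogonal with unit length.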
By the way, none of this is restricted to $\mathbb{R}^3$; it works for any $\mathbb{R}^n$, provided the basis has $n$ vectors. More generally still, it applies to any inner product space.
The definition of linear independence says you can't make $0$ out of a nontrivial linear combination. It says nothing about not being able to make any other vector out of linear combinations.
$(1,0)$ and $(0,1)$ are independent since you cannot write $(0,0) = c(1,0) + d(0,1)$ without $c=d=0$. But you can write every other vector as a nontrivial linear combination of these; for example, $(2,3) = 2(1,0)+3(0,1)$. Spend some time making sense of the definitions with concrete examples like this one and it will eventually click.
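The example can be checked mechanically (a trivial sketch; the function name is mine):

```python
# A combination c(1,0) + d(0,1) is simply the pair (c, d).
def combo(c, d):
    return (c * 1 + d * 0, c * 0 + d * 1)

# Independence: the only combination giving (0, 0) is c = d = 0.
assert combo(0, 0) == (0, 0)

# But every other vector is some (nontrivial) combination of the two:
assert combo(2, 3) == (2, 3)
```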
If you call your orthogonal set $\{v_1, v_2, \dots, v_n\}$, you can trivially write any vector in your set as a linear combination (take all coefficients $0$ except the coefficient of $v_k$ which is $1$).
$v_k = 0\cdot v_1+0\cdot v_2+\dots+0\cdot v_{k-1}+1\cdot v_k+0\cdot v_{k+1}+\dots+0\cdot v_n$
This is true of any set, whether it is orthogonal or not.
Moreover, any vector in the span of $\{v_1, v_2, \dots, v_n\}$ can be written as a linear combination of these vectors. This is again true of any set, whether orthogonal or not.
Best Answer
No. The set $\beta=\{(1,0),(1,1)\}$ forms a basis for $\Bbb R^2$ but is not an orthogonal basis. This is why we have Gram-Schmidt!
More generally, the set $\beta=\{e_1,e_2,\dotsc,e_{n-1},e_1+\dotsb+e_n\}$ forms a non-orthogonal basis for $\Bbb R^n$.
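A quick check in Python (a sketch) that $\beta=\{(1,0),(1,1)\}$ spans $\Bbb R^2$ yet is not orthogonal:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

b1, b2 = (1.0, 0.0), (1.0, 1.0)

# Not orthogonal: the dot product is 1, not 0.
not_orthogonal = dot(b1, b2)

# Still a basis: any (x, y) equals (x - y)*b1 + y*b2.
x, y = 5.0, 2.0
v = tuple((x - y) * a + y * b for a, b in zip(b1, b2))
```

So every vector of $\Bbb R^2$ is reachable from $\beta$, even though the basis fails the orthogonality test.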
To acknowledge the conversation in the comments, it is true that orthogonality of a set of vectors implies linear independence. Indeed, suppose $\{v_1,\dotsc,v_k\}$ is an orthogonal set of nonzero vectors and $$ \lambda_1 v_1+\dotsb+\lambda_k v_k=\mathbf 0\tag{1} $$ Then applying $\langle-,v_j\rangle$ to (1) gives $\lambda_j\langle v_j,v_j\rangle=0$ so that $\lambda_j=0$ for $1\leq j\leq k$.
The examples provided in the first part of this answer show that the converse to this statement is not true.