Here is a way of avoiding Ado's theorem, at the expense of using the Poincaré-Birkhoff-Witt theorem. The PBW theorem has no finite-dimensionality or characteristic hypotheses, so you may like this better. Note, however, that I will realize finite-dimensional Lie algebras as endomorphisms of infinite-dimensional vector spaces.
Let's define a concrete Lie algebra to be a pair consisting of a vector space $V$ and a vector subspace $\mathfrak{h}$ of $\mathrm{End}(V)$ closed under the commutator $[A,B] = AB - BA$.
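To illustrate the definition, here is a minimal sketch (my own, using NumPy; the names `e`, `f`, `h`, and `bracket` are illustrative choices, not from the answer): the traceless $2 \times 2$ matrices span a subspace of $\mathrm{End}(\Bbb C^2)$ that is closed under commutator, so they form a concrete Lie algebra, namely $\mathfrak{sl}_2$.

```python
import numpy as np

# Standard basis of sl_2 inside End(C^2)
e = np.array([[0., 1.], [0., 0.]])   # strictly upper triangular
f = np.array([[0., 0.], [1., 0.]])   # strictly lower triangular
h = np.array([[1., 0.], [0., -1.]])  # traceless diagonal

def bracket(a, b):
    """Commutator [a, b] = ab - ba in End(V)."""
    return a @ b - b @ a

# Closure under commutator: brackets of basis elements
# land back in span(e, f, h).
assert np.allclose(bracket(e, f), h)
assert np.allclose(bracket(h, e), 2 * e)
assert np.allclose(bracket(h, f), -2 * f)
```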
Theorem: Every Lie algebra is isomorphic to a concrete Lie algebra.
Proof: Let $\mathfrak{g}$ be a Lie algebra and $U$ its universal enveloping algebra. The Lie algebra $\mathfrak{g}$ acts on $U$ by left multiplication, giving a map $\mathfrak{g} \to \mathrm{End}(U)$ that takes the bracket to the commutator. We must prove this map is injective.
Choose a basis $\{ v_i \}$ for $\mathfrak{g}$. Suppose that left multiplication by $\sum a_i v_i$ is $0$. Then $\left( \sum a_i v_i \right) \cdot 1 = \sum a_i v_i$ would be zero in $U$. But, by the PBW theorem, the $v_i$ are linearly independent in $U$, a contradiction. QED
As far as I know, this special case of PBW is as hard as the whole theorem.
I would answer your final question with "no".
Note first of all that only reductive Lie groups/algebras, and in a way only their (split) semisimple part, "have" a root system. This shows that the root system lies on a different structural level than the mere existence of a Lie group resp. Lie algebra structure: only a special subclass has one.
Now, the prototypical reductive Lie group resp. algebra is $GL_n(\Bbb C)$ resp. $\mathfrak{gl}_n(\Bbb C)$. If you work with them, you'll figure out very soon that computing matrix products $A\cdot B$ resp. commutators $[A, B]$ by hand each time is not the way you want to spend your hours. So one looks deeper into the structure of matrices, and one figures out that the diagonal matrices on the one hand, and specific unipotent (resp. nilpotent) matrices $I_n + E_{i,j}$ (resp. just $E_{i,j}$) on the other, are the basis of everything: if one understands products resp. commutators of these, one understands the entire group resp. algebra.
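The commutators of the matrix units mentioned above are governed by the well-known formula $[E_{i,j}, E_{k,l}] = \delta_{jk} E_{i,l} - \delta_{li} E_{k,j}$. A quick NumPy sketch (my own; the helper names `E` and `bracket` are illustrative) verifies this exhaustively in $\mathfrak{gl}_3$:

```python
import numpy as np

n = 3

def E(i, j):
    """Matrix unit E_{i,j}: 1 in row i, column j (0-indexed), 0 elsewhere."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def bracket(a, b):
    """Commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# Check [E_{i,j}, E_{k,l}] = delta_{jk} E_{i,l} - delta_{li} E_{k,j}
# for all index combinations.
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                lhs = bracket(E(i, j), E(k, l))
                rhs = (j == k) * E(i, l) - (l == i) * E(k, j)
                assert np.allclose(lhs, rhs)
```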
Then, for (split) semisimple Lie groups/algebras, I personally think of root systems as a vast generalisation of that. Cf. my answer here. The analogue of the diagonal matrices that you want to find in your group resp. algebra is a torus resp. Cartan subalgebra, the analogue of those elementary matrices are the root spaces, etc. Then, just as you figure out after a while that among the matrices you don't really need all the $E_{i,j}$ but only the $E_{i,i+1}$, you will find that all the information of the root system is already contained in its so-called simple roots, etc.
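The claim that the $E_{i,i+1}$ suffice can be seen concretely: every $E_{i,j}$ with $i < j$ is an iterated bracket of the "simple" units $E_{i,i+1}$, e.g. $E_{1,3} = [E_{1,2}, E_{2,3}]$. A small NumPy sketch (my own illustrative helpers, as above) in $\mathfrak{gl}_4$:

```python
import numpy as np

n = 4

def E(i, j):
    """Matrix unit E_{i,j} (0-indexed)."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def bracket(a, b):
    return a @ b - b @ a

# Iterated brackets of the simple units E_{i,i+1} recover the
# remaining upper-triangular matrix units:
assert np.allclose(bracket(E(0, 1), E(1, 2)), E(0, 2))
assert np.allclose(bracket(E(0, 2), E(2, 3)), E(0, 3))
assert np.allclose(bracket(E(1, 2), E(2, 3)), E(1, 3))
```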
Finally, note that for non-split semisimple Lie algebras / algebraic groups, which exist over non-algebraically closed fields, the story is more complicated, and in general root systems alone do not help. Tits and Satake did a lot of work there. While over $\Bbb R$ the Cartan classification and properties of compact forms remedy this, over number fields like $\Bbb Q$ the situation becomes much trickier.
Lastly, to get back to the first point: even over $\Bbb C$ and $\Bbb R$, there is a huge class of Lie groups resp. algebras, namely the solvable ones, where the theory of root systems is quite useless.
Well, Lie algebras naturally arise from the Lie bracket of vector fields and from taking the Lie algebra of a Lie group. If we look at the Lie algebra of a matrix subgroup, then the Lie bracket is the commutator of matrices.
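The vector-field bracket can be sketched symbolically (a minimal sketch of mine, assuming SymPy is available; the coefficient-tuple representation and the name `bracket` are my own choices). On $\Bbb R^2$, the fields $X = x\,\partial_y$ and $Y = y\,\partial_x$ satisfy $[X, Y] = x\,\partial_x - y\,\partial_y$, the familiar $\mathfrak{sl}_2$-type relation:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def bracket(X, Y):
    """Lie bracket of vector fields given as coefficient tuples:
    [X, Y]^i = sum_j (X^j dY^i/dx_j - Y^j dX^i/dx_j)."""
    return tuple(
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(len(coords)))
        for i in range(len(coords)))

X = (sp.Integer(0), x)  # the field x * d/dy
Y = (y, sp.Integer(0))  # the field y * d/dx

# [x d/dy, y d/dx] = x d/dx - y d/dy
assert bracket(X, Y) == (x, -y)
```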