EDIT: I originally posted a partial proof, and the OP showed me how to complete it in a comment. I'm revising the post to give a complete proof.
Suppose on the contrary that $$\sum_{i=1}^{k}\frac{N_i}{2^{i}} > 1,$$ and let $E_n$ be the expected number of ways that a random binary string of length $n$ can be deciphered, where $n$ is greater than the length of the longest codeword. Then $$E_n = \sum_{i=1}^k\frac{N_i}{2^{i}}E_{n-i},$$
since the probability that the first $i$ bits of the string form a codeword is $\frac{N_i}{2^{i}},$ and the expected number of ways to decipher the rest of the string is $E_{n-i}.$
If for some $n$ we have $E_n > 1,$ then there is some string of length $n$ with more than one decipherment, a contradiction.
At this point, I was struggling with the roots of the characteristic equation, but the OP showed me a much simpler way. The remainder of the proof is based on his insight.
Let $I = \{i \mid N_i > 0\}.$ Suppose $$n = \sum_{i \in I}{a_ii},$$ where the $a_i$ are nonnegative integers. Then $E_n>0,$ because the string $$\sum_{i\in I}w_i^{a_i}$$ is a string of length $n$ with a decipherment. Here each $w_i$ is a codeword of length $i,$ $w_i^{a_i}$ indicates the $a_i$-fold repetition of $w_i,$ and the sum means concatenation. (Of course, the only way $E_n$ can be $0$ is if no $n$-bit string has a decipherment.)
Now let $r = \gcd(I).$ Then there is an $N$ such that for every $n\ge N,$ $nr$ can be written in the form $$nr = \sum_{i \in I}{a_ii}, \text{ with }a_i\ge 0,$$ so that $E_{nr} > 0$ for sufficiently large $n.$ Then for all sufficiently large $n,$ in the recurrence $$E_{nr} = \sum_{i \in I}\frac{N_i}{2^{i}}E_{nr-i}\tag1, $$ every expectation on the right-hand side is positive (each $nr-i$ is again a multiple of $r,$ since $r$ divides every $i \in I$). Let $c$ be the minimum of these expectations. Then $$E_{nr} \ge c \sum_{i\in I}\frac{N_i}{2^{i}} = ct, \text{ where } t >1. $$
Now if we replace $n$ by $n+1$ in $(1)$, the minimum expectation on the right-hand side will still be at least $c$, since $ct > c$. Thus the same argument shows that $E_{nr+r} \ge tc.$ Continue in this manner through enough terms that every expectation on the right-hand side is $\ge tc.$ Then the next expectation in the sequence is $\ge t^2c,$ and we can find expectations $\ge t^hc$ for every positive integer $h$. Since $t>1,$ some $E_n>1,$ which completes the proof.
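As a sanity check on the recurrence (a sketch, using the lengths $1,2,3,3,3$ from the question further down, whose Kraft sum is $9/8 > 1$), one can compute the $E_n$ exactly and watch an $E_n$ cross $1$:

```python
from fractions import Fraction

# N[i] = number of codewords of length i; Kraft sum = 1/2 + 1/4 + 3/8 = 9/8 > 1.
N = {1: 1, 2: 1, 3: 3}

# E[n] = expected number of decipherments of a random n-bit string, via the
# recurrence E_n = sum_i (N_i / 2^i) E_{n-i}, with E_0 = 1 (the empty string
# has exactly one decipherment).
E = [Fraction(1)]
for n in range(1, 11):
    E.append(sum(Fraction(Ni, 2**i) * E[n - i] for i, Ni in N.items() if i <= n))

print(E[10])  # 523/512 > 1: some 10-bit string has more than one decipherment
```

So already at $n = 10$ the expectation exceeds $1$, and unique decipherability fails.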
The number-theoretic fact about representing large multiples of $r$ follows at once from Schur's theorem (the $r=1$ case). Schur gave his name to several theorems, so in googling it's better to search for "Frobenius coin problem."
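To illustrate the representation fact numerically (the set $\{6, 10, 15\}$ here is my own example, not taken from the problem): with $\gcd = 1$, every sufficiently large integer is a nonnegative integer combination, and brute force finds the largest exception.

```python
from math import gcd
from functools import reduce

# Assumed example set with gcd r = 1.
I = [6, 10, 15]
r = reduce(gcd, I)  # 1

# reachable[n] is True iff n is a nonnegative integer combination of I.
limit = 200
reachable = [False] * (limit + 1)
reachable[0] = True  # the empty combination
for n in range(1, limit + 1):
    reachable[n] = any(n >= i and reachable[n - i] for i in I)

largest_gap = max(n for n in range(limit + 1) if not reachable[n])
print(largest_gap)  # 29: every n >= 30 is representable
```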
With 3 words that have length 3, I get the codewords 110, 111, 112.
Your try:
L C
1 0
2 10
3 110
3 111
3 112
If you are looking for a binary prefix-free code, then 112 does not make sense: a binary codeword can only contain the digits 0 and 1.
Before looking for such a code with prescribed lengths, you might test whether it's possible. A necessary and sufficient criterion is Kraft's inequality: $\sum 2^{-l_i} \le 1$.
In this case we have $\frac12 + \frac14 + 3\cdot\frac18 = \frac98 > 1.$
Hence, it's not possible to find a binary prefix-free code with those lengths.
BTW: I have no idea why you've computed $\sum 3^{-l_i}$ (I guess that's what you meant). That would make sense if you wanted a ternary code. In that case, yes, Kraft's inequality is satisfied, and you can build a ternary prefix-free code (your try is fine).
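Both sums are quick to verify (a minimal sketch; the helper `kraft_sum` is just illustrative):

```python
from fractions import Fraction

lengths = [1, 2, 3, 3, 3]  # the prescribed codeword lengths from the question

def kraft_sum(lengths, D):
    """Kraft sum for an alphabet of size D: sum of D^(-l) over the lengths."""
    return sum(Fraction(1, D**l) for l in lengths)

print(kraft_sum(lengths, 2))  # 9/8  -> exceeds 1: no binary prefix-free code
print(kraft_sum(lengths, 3))  # 5/9  -> at most 1: a ternary code exists
```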
Best Answer
Call $2^{-l}$ the weight of a codeword of length $l$.
The fastest way to get a long codeword (one of small weight) is to divide the unit interval into two repeatedly. So weights $1/2, 1/4, 1/8, 1/16, 1/32, 1/32$ for example.
To get a codeword of weight $1/2^r$ you need at least another of length $r$ and then one of weight $1/2^{r-1}$ (or smaller ones that add up to this) and so on. So you need at least $r+1$ codewords to have a length $r$ codeword.
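The repeated-halving construction above can be sketched as a greedy (canonical) assignment of codewords to the sorted lengths; the helper `canonical_code` is my own illustration, not part of the answer.

```python
def canonical_code(lengths):
    """Assign binary codewords to lengths via the standard Kraft construction:
    process lengths in increasing order, always taking the next free value."""
    code = []
    value = 0      # next free code value at the current length
    prev_len = 0
    for l in sorted(lengths):
        value <<= (l - prev_len)            # extend to the new length
        code.append(format(value, f'0{l}b'))
        value += 1
        prev_len = l
    return code

r = 5
lengths = list(range(1, r + 1)) + [r]  # weights 1/2, 1/4, ..., 1/2^5, 1/2^5
words = canonical_code(lengths)
print(words)  # ['0', '10', '110', '1110', '11110', '11111']

# Prefix-free check: no codeword is a prefix of another.
assert all(not b.startswith(a) for a in words for b in words if a != b)
assert len(words) == r + 1  # exactly r+1 codewords to reach a length-r codeword
```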