Approximation for the Lambert $W$ function from $x=5$ to $x=105$

approximation, graphing-functions, lambert-w

Recently, I learnt about the Lambert $W$ function, which is the inverse of $f(x) = x\cdot e^{x}$, and have been trying to solve problems related to it on the internet. After solving each problem, however, I find myself going back to WolframAlpha to calculate the result, which is rather inconvenient. So I was wondering if I could find an approximation for $W(x)$ that I can simply plug into my standard calculator, which is almost always close at hand.
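(For checking answers without WolframAlpha, here is a minimal sketch of my own, not part of the original question: it computes the principal branch of $W$ with Newton's method applied to $w e^w = x$. The function name and starting guess are just illustrative choices.)

```python
import math

def lambert_w(x, tol=1e-12, max_iter=50):
    """Solve w * exp(w) = x for the principal branch (valid for x >= 0)."""
    w = math.log(x + 1.0)              # rough starting guess
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - x                 # residual of w*e^w - x
        step = f / (ew * (w + 1.0))    # Newton step
        w -= step
        if abs(step) < tol:
            break
    return w

print(lambert_w(5.0))    # ~ 1.3267
print(lambert_w(105.0))  # ~ 3.4233
```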

Playing around with the graph of $W(x)$ seemed like the most obvious method to me, and so I began to try out different functions to match its graph. After a while, I settled upon $f(x) = 1.006 \log _{3.96} (x+1)$ from $x=5$ to $x=105$. From the graph of $g(x) = W(x) - 1.006 \log _{3.96} (x+1)$ for $x=5 \: \text{to} \: x=105$ on WolframAlpha, $f(x)$ seems to be within $\approx\pm 0.02$ of $W(x)$. Is this a good approximation? If not, how can I make it better? Furthermore, is there a way to find a good approximation algebraically rather than graphically?
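(As a sanity check on the graphical estimate, here is a short sketch of mine, assuming Python with numpy and scipy installed, that compares $f(x)=1.006\log_{3.96}(x+1)$ against `scipy.special.lambertw` on a grid over $[5,105]$.)

```python
import numpy as np
from scipy.special import lambertw

x = np.linspace(5.0, 105.0, 2001)
w = lambertw(x).real                        # principal branch, real part
f = 1.006 * np.log(x + 1.0) / np.log(3.96)  # log base 3.96 via change of base
print(np.max(np.abs(w - f)))                # should be roughly 0.02
```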

Best Answer

Considering the bounds
$$\log (x)-\log (\log (x))+\frac12\,\frac{\log (\log (x))}{ \log (x)} < W(x),$$
$$W(x)< \log (x)-\log (\log (x))+\frac e{e-1 }\,\frac{ \log (\log (x))}{ \log (x)},$$
we can, over a specific range $[a,b]$, numerically minimize
$$\Phi(k)=\int_a^b \Big[\log (x)-\log (\log (x))+k\,\frac{\log (\log (x))}{ \log (x)}-W(x)\Big]^2\,dx.$$

For $a=5$ and $b=105$, this gives $k\sim 0.881076$, but the formula is a bit more complex than yours.
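(If it helps, this minimization can be reproduced numerically; the sketch below is my own, uses scipy with illustrative names, and should land near the quoted $k\sim 0.881$.)

```python
import numpy as np
from scipy.special import lambertw
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

a, b = 5.0, 105.0

def integrand(x, k):
    L1, L2 = np.log(x), np.log(np.log(x))
    approx = L1 - L2 + k * L2 / L1          # bound-shaped approximation
    return (approx - lambertw(x).real) ** 2

def phi(k):
    return quad(integrand, a, b, args=(k,))[0]

res = minimize_scalar(phi, bounds=(0.5, 1.0), method='bounded')
print(res.x)   # should be close to the quoted k ~ 0.881076
```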

Trying something of the same shape as yours, $$W(x) \sim a+b \log_e(x+c),$$ a nonlinear regression gives $(R^2>0.999999)$ $$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ a & -0.201141 & 0.004453 & \{-0.209981,-0.192302\} \\ b & +0.774451 & 0.000974 & \{+0.772518,+0.776385\} \\ c & +2.288360 & 0.034073 & \{+2.220730,+2.356000\} \\ \end{array}$$ which leads to a maximum absolute error of $0.005$.
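(A fit of the same three-parameter model can be sketched with `scipy.optimize.curve_fit`; this is only my own illustration, and the exact estimates depend on how $[5,105]$ is sampled and weighted, so they may differ slightly from the table above.)

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import curve_fit

x = np.linspace(5.0, 105.0, 500)
y = lambertw(x).real

def model(x, a, b, c):
    return a + b * np.log(x + c)            # W(x) ~ a + b*ln(x + c)

popt, _ = curve_fit(model, x, y, p0=[-0.2, 0.77, 2.3])
print(popt)                                 # close to [-0.201, 0.774, 2.288]
print(np.max(np.abs(y - model(x, *popt))))  # roughly 0.005
```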

Congratulations on your idea!

Edit

Since we are using totally empirical models, let us try

$$W(x) \sim a+b \Big[\log_e(x^d+c)\Big]^f$$

$$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ a & 0.256813 & 0.001340 & \{0.254152,0.259474\} \\ b & 0.815983 & 0.001525 & \{0.812955,0.819012\} \\ c & 0.547202 & 0.001079 & \{0.545059,0.549345\} \\ d & 0.678026 & 0.000656 & \{0.676723,0.679329\} \\ f & 1.172580 & 0.000318 & \{1.171940,1.173210\} \\ \end{array}$$ which reduces the previous sum of squares by a factor close to $800,000$ (!!) and leads to a maximum absolute error equal to $5\times 10^{-6}$. Better, isn't it?
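(The five-parameter model can be fitted the same way; again only a sketch of mine under the same assumptions, uniform sampling of $[5,105]$ and scipy's `curve_fit`, with the quoted estimates used as the starting point.)

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import curve_fit

x = np.linspace(5.0, 105.0, 2000)
y = lambertw(x).real

def model(x, a, b, c, d, f):
    return a + b * np.log(x**d + c)**f      # W(x) ~ a + b*[ln(x^d + c)]^f

p0 = [0.26, 0.82, 0.55, 0.68, 1.17]         # start near the quoted estimates
popt, _ = curve_fit(model, x, y, p0=p0, maxfev=20000)
print(popt)
print(np.max(np.abs(y - model(x, *popt))))  # roughly 5e-6 if all goes well
```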