For trivariate degree-$D$ polynomial interpolation of three-component (color) vectors, you need
$$f(r, g, b) = \sum_{i=0}^D \sum_{j=0}^D \sum_{k=0}^D \, r^i \, g^j \, b^k \, \vec{C}_{ijk}$$
where $\vec{C}_{ijk}$ are three-component (color) vectors to be fitted to the known data. There are $(D + 1)^3$ of those. Note that usually the components are real, between 0 and 1, so that the actual valid range of inputs and outputs is just a matter of scaling, and the constant color vector components are within a sensible range (not huge in magnitude).
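To make the sum concrete, here is a minimal pure-Python sketch of evaluating $f(r,g,b)$, assuming the coefficients are stored in a hypothetical nested $(D+1)\times(D+1)\times(D+1)$ table of 3-component lists:

```python
# Sketch: evaluate the trivariate polynomial f(r, g, b) given a coefficient
# table C[i][j][k] holding 3-component color vectors (hypothetical layout;
# any indexable (D+1)^3 table works).
def poly_eval(C, r, g, b):
    D = len(C) - 1
    out = [0.0, 0.0, 0.0]
    # Precompute powers to avoid repeated exponentiation in the inner loop.
    rp = [r**i for i in range(D + 1)]
    gp = [g**j for j in range(D + 1)]
    bp = [b**k for k in range(D + 1)]
    for i in range(D + 1):
        for j in range(D + 1):
            for k in range(D + 1):
                w = rp[i] * gp[j] * bp[k]       # monomial r^i g^j b^k
                for c in range(3):
                    out[c] += w * C[i][j][k][c]
    return out
```

For $D=1$ with only $\vec{C}_{100}=(1,0,0)$, $\vec{C}_{010}=(0,1,0)$, $\vec{C}_{001}=(0,0,1)$ nonzero, this reproduces the identity map, which is a handy sanity check.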
For cubic interpolation, there are $(3+1)^3 = 4^3 = 64$ coefficient vectors:
$$\begin{aligned}
f(r, g, b) = &\; \vec{C}_{000} + \vec{C}_{001} \, b + \vec{C}_{002} \, b^2 + \vec{C}_{003} \, b^3 \\
\; + &\; \vec{C}_{010} \, g + \vec{C}_{011} \, g \, b + \vec{C}_{012} \, g \, b^2 + \vec{C}_{013} \, g \, b^3 \\
\; + &\; \vec{C}_{020} \, g^2 + \vec{C}_{021} \, g^2 \, b + \vec{C}_{022} \, g^2 \, b^2 + \vec{C}_{023} \, g^2 \, b^3 \\
\; + &\; \vec{C}_{030} \, g^3 + \vec{C}_{031} \, g^3 \, b + \vec{C}_{032} \, g^3 \, b^2 + \vec{C}_{033} \, g^3 \, b^3 \\
\; + &\; \vec{C}_{100} \, r + \vec{C}_{101} \, r \, b + \vec{C}_{102} \, r \, b^2 + \vec{C}_{103} \, r \, b^3 \\
\; + &\; \vec{C}_{110} \, r \, g + \vec{C}_{111} \, r \, g \, b + \vec{C}_{112} \, r \, g \, b^2 + \vec{C}_{113} \, r \, g \, b^3 \\
\; \vdots \; &\; \\
\; + &\; \vec{C}_{330} \, r^3 \, g^3 + \vec{C}_{331} \, r^3 \, g^3 \, b + \vec{C}_{332} \, r^3 \, g^3 \, b^2 + \vec{C}_{333} \, r^3 \, g^3 \, b^3 \\
\end{aligned}$$
Note that this is capable of mapping blues to reds, and so on. If you have 64 color-to-color samples, you can use e.g. SageMath to solve the huge mess of equations:
r, g, b = var('r', 'g', 'b')
R_000, R_001, ..., R_333 = var('R_000', 'R_001', ..., 'R_333')
G_000, G_001, ..., G_333 = var('G_000', 'G_001', ..., 'G_333')
B_000, B_001, ..., B_333 = var('B_000', 'B_001', ..., 'B_333')
R(r, g, b) = R_000 + R_001*b + R_002*b^2 + R_003*b^3 + ...
G(r, g, b) = G_000 + G_001*b + G_002*b^2 + G_003*b^3 + ...
B(r, g, b) = B_000 + B_001*b + B_002*b^2 + B_003*b^3 + ...
solve([ R(?,?,?) == ?, G(?,?,?) == ?, B(?,?,?) == ?,
..
R(?,?,?) == ?, G(?,?,?) == ?, B(?,?,?) == ? ],
R_000, R_001, R_002, R_010, ..., R_333,
G_000, G_001, G_002, G_010, ..., G_333,
B_000, B_001, B_002, B_010, ..., B_333)
where you need to fill in both the ... and the known color mapping samples marked with ? . I don't know if SageMath can handle a system that large, though.
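Instead of symbolic solving, a numerical least-squares fit is usually more practical: each output channel is linear in the unknown coefficients, so you can build the $(D+1)^3$-column design matrix of monomials and solve it with `numpy.linalg.lstsq`. A sketch under that assumption (the function name and data layout are made up for illustration):

```python
import numpy as np

def fit_coeffs(samples_in, samples_out, D=3):
    """Least-squares fit of the (D+1)^3 coefficient vectors.
    samples_in, samples_out: (N, 3) arrays of input/output colors."""
    rgb = np.asarray(samples_in, dtype=float)
    # Design matrix: one column per monomial r^i g^j b^k.
    cols = []
    for i in range(D + 1):
        for j in range(D + 1):
            for k in range(D + 1):
                cols.append(rgb[:, 0]**i * rgb[:, 1]**j * rgb[:, 2]**k)
    A = np.stack(cols, axis=1)          # shape (N, (D+1)^3)
    # Solve all three channels at once; C has shape ((D+1)^3, 3).
    C, *_ = np.linalg.lstsq(A, np.asarray(samples_out, dtype=float),
                            rcond=None)
    return C
```

With exactly 64 samples in general position the fit is an exact solve; with more samples it becomes a least-squares approximation, which is usually what you want anyway.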
Unfortunately, your initial premise is incorrect. Those colors were not mapped by any algorithm, but by actual human artists. The mapping is not continuous, so interpolation just won't work!
Mapping r5g5b5 to r8g8b8 is best done using a $2^{15}$-entry (32,768-entry) color array. (For 24-bit color words, you'll need 98,304 bytes; for 32-bit, 131,072 bytes.)
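A minimal sketch of precomputing such a table, assuming the common layout with red in bits 14–10, green in 9–5, blue in 4–0 (your hardware's layout may differ):

```python
# Sketch: precompute a 2^15-entry lookup table mapping packed r5g5b5 words
# to (r8, g8, b8) tuples. The 5-to-8-bit expansion replicates the top bits,
# so 0 maps to 0 and 31 maps to 255.
def expand5(v):
    return (v << 3) | (v >> 2)

LUT = []
for word in range(1 << 15):
    r5 = (word >> 10) & 0x1F   # assumed layout: r in bits 14-10
    g5 = (word >> 5) & 0x1F
    b5 = word & 0x1F
    LUT.append((expand5(r5), expand5(g5), expand5(b5)))
```

At runtime each pixel conversion is then a single indexed load, which is about as cheap as it gets.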
Instead of mapping colors directly, convert the source RGB to e.g. HSL (hue, saturation, and lightness), and apply any filtering in this space. (Say, increase saturation, maybe tint some hues, and so on.) Then, convert to the final color words. You do this for each possible input color once, precalculating the color map.
Color filtering in HSL colorspace is simpler, because it is closer to human perception.
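A sketch of this precompute-in-HSL approach using Python's standard `colorsys` module (note it uses HLS component order); the saturation boost factor is a made-up example parameter:

```python
import colorsys

# Sketch: filter one color in HSL space, then quantize to 8-bit output.
# sat_boost is an illustrative parameter, not from the original answer.
def filter_color(r5, g5, b5, sat_boost=1.2):
    # Normalize the 5-bit components to [0, 1].
    r, g, b = r5 / 31.0, g5 / 31.0, b5 / 31.0
    h, l, s = colorsys.rgb_to_hls(r, g, b)    # note: HLS order, not HSL
    s = min(1.0, s * sat_boost)               # e.g. increase saturation
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    # Quantize to 8-bit output components.
    return tuple(round(c * 255) for c in (r, g, b))

# Precompute the color map once for every possible input color.
colormap = {(r, g, b): filter_color(r, g, b)
            for r in range(32) for g in range(32) for b in range(32)}
```

Grays (zero saturation) pass through unchanged, which is a useful sanity check on any such filter.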
Best Answer
Comment (but probably not a solution) too long for the comment field: the series of the exponential gives an alternating series for the given function
$$e^{-x^2}=1-x^2+\tfrac12 x^4-\tfrac16x^6+\tfrac1{24}x^8+\dots+\tfrac{1}{k!}(-x^2)^k+\dots$$
By the quantitative statements of the Leibniz criterion,
$$e_{2n-1}(-x^2)\le e_{2n+1}(-x^2)\le e^{-x^2}\le e_{2n}(-x^2),$$
where $e_n$ is the $n$-th partial sum of the exponential series and the estimate is true for $x^2<2n$. This gives an absolute error of
$$\frac1{(2n)!}x^{4n}\left(1-\tfrac{x^2}{2n+1}\right)\le\left|e_{2n-1}(-x^2)- e^{-x^2}\right|\le\frac1{(2n)!}x^{4n}$$
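The sandwich above is easy to sanity-check numerically; a minimal sketch (the specific $x$ and $n$ are arbitrary choices satisfying $x^2<2n$):

```python
from math import exp, factorial

# Numeric sanity check of e_{2n-1} <= e_{2n+1} <= e^{-x^2} <= e_{2n},
# where e_n is the n-th partial sum of the exponential series,
# valid for x^2 < 2n.
def partial(n, t):
    """n-th partial sum of exp(t): sum_{k=0}^{n} t^k / k!"""
    return sum(t**k / factorial(k) for k in range(n + 1))

x, n = 1.5, 3                  # x^2 = 2.25 < 2n = 6
t = -x * x
assert partial(2*n - 1, t) <= partial(2*n + 1, t) <= exp(t) <= partial(2*n, t)
```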
which shows that, in terms of polynomial approximation, one cannot do much better than the partial sums of the exponential series. To get a small relative error, one now needs to choose $n$ large enough to get
$$\frac{e^{15^2}(15)^{4n}}{(2n)!}<ϵ$$
or about
$$2n(\ln(2n)-\ln(225e))>225+|\ln(ϵ)|$$
which means that you need $n=406,\dots,419$ to get $ϵ=10^{-4},\dots,10^{-18}$.
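The smallest $n$ satisfying the logarithmic criterion above can be found by a direct scan; since the criterion is itself only approximate, treat the output as a rough guide rather than an exact threshold:

```python
from math import log, e

# Smallest n with 2n(ln(2n) - ln(225 e)) > 225 + |ln(eps)|.
# This is the approximate criterion from the text, so the result is
# a rough guide, not a sharp bound.
def n_needed(eps):
    n = 1
    while 2*n * (log(2*n) - log(225 * e)) <= 225 + abs(log(eps)):
        n += 1
    return n
```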
Using arithmetic rules of the exponential, one notices that $e^{-x^2}=\left(e^{-(\tfrac{x}{m})^2}\right)^{m^2}$, for instance with $m=15$, and the relative error $ϵ$ is guaranteed if the partial sum $e_n$ is taken such that the relative error of $e_n(-x^2)$ to $e^{-x^2}$ on $[-\tfrac{15}m,\tfrac{15}m]$ is smaller than $\tfrac{ϵ}{2m^2}$. With $m=15$ this requires the easier-to-control inequality
$$\frac{e^1}{(2n)!}<\tfrac{ϵ}{450}$$
which gives the usual precisions for $n=6,...,12$.
Evaluating $e_{811}(-15^2)$ in Magma gives a gross error: instead of ...e-98 it shows 8e73.
$e_{21}(-\tfrac{x^2}{225})^{225}$ has relative error 5e-19 at $x=15$.
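The 5e-19 figure can be reproduced with Python's arbitrary-precision `decimal` module; at $x=15$ with $m=15$ the reduced argument is exactly $-1$, so the check is short (the working precision of 60 digits is an arbitrary safe choice):

```python
from decimal import Decimal, getcontext

# High-precision check of the claimed ~5e-19 relative error of
# e_21(-x^2/225)^225 at x = 15, where the reduced argument is exactly -1.
getcontext().prec = 60
term = Decimal(1)
s = Decimal(1)
for k in range(1, 22):          # partial sum e_21(-1)
    term = term * Decimal(-1) / k
    s += term
approx = s ** 225               # undo the argument reduction
true = Decimal(-225).exp()      # reference value of e^{-225}
rel = abs(approx / true - 1)
```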
If only basic operations are allowed, then we go back to basic divide-and-conquer, or half-and-square in this case. Take $m$ in the above calculation to be a nice power of $2$: $m=16$ could work, but let's take $m=32$. Then the error estimate is still largest at the interval end, and the condition at the even larger $x=16$ is
$$\frac{e^{\frac14} \left(\frac14\right)^{2n}}{(2n)!}<\tfrac{ϵ}{2048} \iff \frac{\sqrt[4]e}{2^{4n-11}(2n)!}<ϵ$$
Trying out some small values leads to $n=5$ as a likely candidate, and indeed, $e_9(-\tfrac{15^2}{1024})^{1024}\cdot\exp(15^2)-1\approx -1e-10$.
$n=4$ with $e_7$ works as well, with an actual error of about 2e-7 against an error bound of 1e-6.
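The $m=32$ figure is easy to confirm in plain double-precision floats, since the $-10^{-10}$ truncation error dominates the rounding noise:

```python
from math import exp, factorial

# Float check of the m = 32 scheme: e_9 at the reduced argument -(15/32)^2,
# raised to the 32^2 = 1024-th power, compared against exp(-15^2).
u = -(15.0 / 32.0) ** 2                  # = -225/1024
e9 = sum(u**k / factorial(k) for k in range(10))
result = e9 ** 1024 * exp(225) - 1       # relative error, expected ~ -1e-10
```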
With less than 30 operations, replace 7 by 9 for higher accuracy:
or with a division
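The elided listing can be sketched as follows: a plain-Python illustration of the half-and-square scheme with $m=32$ and degree 9 (Horner evaluation of the partial sum, then ten squarings, since $32^2 = 2^{10}$), roughly 30 basic operations in total. This is an illustration under the analysis above, not the answerer's exact code:

```python
# Half-and-square with only basic operations:
# evaluate e_9 at u = -(x/32)^2 in Horner form, then square ten times.
def exp_neg_sq(x):
    u = -(x * x) * (1.0 / 1024.0)        # -(x/32)^2
    # Horner form of sum_{k=0}^{9} u^k / k!: start with 1/9!, then
    # fold in 1/8!, 1/7!, ..., 1/1!, 1/0!.
    s = 1.0 / 362880.0                   # 1/9!
    for f in (40320.0, 5040.0, 720.0, 120.0, 24.0, 6.0, 2.0, 1.0, 1.0):
        s = s * u + 1.0 / f
    for _ in range(10):                  # raise to the 1024th power
        s = s * s
    return s
```

Per the error analysis, the relative error at $x=15$ is around $10^{-10}$; dropping the degree from 9 to 7 trades a few operations for the weaker 2e-7 accuracy mentioned above.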