I'll adjust what @Bernard said slightly: there's almost no computation here. Well, it depends on whether you know an all-important secret or two.
On one hand there's the transition matrix, and on the other the change of basis matrix. They're inverses of each other. (I tend to forget which is which.)
But the matrix in this case is $$\begin{pmatrix}2&3\\3&5\end{pmatrix}.$$
The reason is that it takes vectors expressed in terms of the basis consisting of its columns and expresses them in the standard basis. This is easily checked by applying the matrix to the elements of the column basis, expressed in terms of itself, i.e. by applying it to $\begin{pmatrix}1\\0\end{pmatrix}$ and $\begin{pmatrix}0\\1\end{pmatrix}$. As you see, out pop the columns, in the standard basis.
Now, back to what @Bernard said. Often you will need to go in the other direction, and then you do need to compute the inverse of this matrix.
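Both directions can be checked numerically. A minimal sketch with NumPy (the vector $v$ below is an illustrative choice, not from the question):

```python
import numpy as np

# The matrix whose columns are the basis vectors, expressed in the
# standard basis (the 2x2 matrix from the answer above).
M = np.array([[2.0, 3.0],
              [3.0, 5.0]])

# Applying M to the column basis expressed in terms of itself
# (i.e. to the standard unit vectors) pops out the columns:
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
print(M @ e1)  # first column: (2, 3)
print(M @ e2)  # second column: (3, 5)

# Going the other direction (standard coordinates -> coordinates
# in the column basis) requires the inverse:
M_inv = np.linalg.inv(M)
v = np.array([5.0, 8.0])   # an illustrative vector in standard coordinates
coords = M_inv @ v         # its coordinates in the column basis
print(coords)

# Sanity check: recombining the columns recovers v.
print(M @ coords)
```

Here the determinant happens to be $2\cdot5-3\cdot3=1$, so the inverse is exact.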
This is not a proper answer, but it might be useful anyway. It's just what I've found so far reading books and trying to make sense of everything I learned about vectors in both calculus and algebra.
You can geometrically picture vectors in $\mathbb{R}^n$ as arrows placed at the origin. Every vector can be uniquely expressed as a linear combination of $n$ linearly independent vectors, so for every basis of this vector space $(\vec{e}_1,\dots,\vec{e}_n)$ we can write:
$$
v=v^1 \vec{e}_1 + \cdots + v^n \vec{e}_n
$$
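Concretely, the components $v^1,\dots,v^n$ are found by solving the linear system whose columns are the basis vectors. A short sketch (the basis and vector here are hypothetical examples):

```python
import numpy as np

# A hypothetical basis of R^2, stored as the columns of E:
# e_1 = (1, 0), e_2 = (1, 2).
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])
v = np.array([3.0, 4.0])   # an illustrative vector

# Solve E @ c = v for the unique components c = (v^1, v^2).
c = np.linalg.solve(E, v)
print(c)

# Verify: v = v^1 e_1 + v^2 e_2.
print(c[0] * E[:, 0] + c[1] * E[:, 1])
```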
Although using a vector basis allows us to uniquely identify every vector in $\mathbb{R}^n$, it isn't really useful when trying to identify every "point" in $\mathbb{R}^n$ because of its limitations (your axes will have to be straight lines and will have to include the "canonical" origin).
In order to identify every point of $\mathbb{R}^n$ with more freedom, our first approach could be affine geometry. In $\mathbb{R}^n$ viewed as an affine space, we can define a coordinate system $(O,\mathcal{B})$, with $O$ a point in $\mathbb{R}^n$ and $\mathcal{B}$ a basis of $\mathbb{R}^n$ as a vector space. Axes are still straight lines, but now we can move from the origin to any other point $O$. An affine coordinate system is a genuine coordinate system, and it is definitely not the same thing as a basis, since $(O,\mathcal{B}) \neq \mathcal{B}$, but we can do better.
We can define a system of $n$ equations (not necessarily linear, a system of linear equations would bring us back to the affine case) that uniquely identify every point in $\mathbb{R}^n$:
$$
x^i = \Phi^i(q^1,\dots,q^n), \qquad i=1,\dots,n
$$
This is precisely what we do when we define cylindrical or spherical coordinates: express $x,y,z$ in terms of three new variables ($\rho,\varphi,z$ and $r,\theta,\varphi$ respectively).
This means that the values $(q^1,\dots,q^n)$ will be our new coordinates. Note that, if this system were linear, we would need to require the coefficient matrix to have a non-zero determinant in order for this system to have a (unique) solution. For a general system of equations, by virtue of the implicit function theorem, the analogous condition is:
$$
\frac{\partial(x^1,\dots,x^n)}{\partial(q^1,\dots,q^n)} \neq 0
$$
i.e. the Jacobian determinant of the transformation must be non-zero.
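This condition can be checked concretely for the spherical coordinates mentioned above. Using the common convention $x=r\sin\theta\cos\varphi$, $y=r\sin\theta\sin\varphi$, $z=r\cos\theta$, the Jacobian determinant works out to $r^2\sin\theta$, which is non-zero away from the origin and the polar axis:

```python
import numpy as np

def spherical_jacobian(r, theta, phi):
    """Jacobian matrix of (x, y, z) = Phi(r, theta, phi) for
    x = r sin(theta) cos(phi), y = r sin(theta) sin(phi), z = r cos(theta)."""
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    return np.array([
        [st * cp, r * ct * cp, -r * st * sp],  # dx/dr, dx/dtheta, dx/dphi
        [st * sp, r * ct * sp,  r * st * cp],  # dy/dr, dy/dtheta, dy/dphi
        [ct,     -r * st,       0.0],          # dz/dr, dz/dtheta, dz/dphi
    ])

# Evaluate at an arbitrary point and compare with r^2 sin(theta).
r, theta, phi = 2.0, 0.7, 1.3
J = spherical_jacobian(r, theta, phi)
print(np.linalg.det(J), r**2 * np.sin(theta))  # the two values agree
```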
These new coordinates $(q^1,\dots,q^n)$ aren't related to any basis, but they induce one at every point $(q^1,\dots,q^n)$ in $\mathbb{R}^n$: the so-called coordinate basis of this coordinate system:
$$
\vec{v}_\mu = \frac{\partial\vec{\Phi}}{\partial q^\mu}
$$
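To make the coordinate basis concrete, take polar coordinates in $\mathbb{R}^2$, $\vec{\Phi}(r,\varphi)=(r\cos\varphi,\,r\sin\varphi)$. Analytically, $\vec{v}_r=(\cos\varphi,\sin\varphi)$ and $\vec{v}_\varphi=(-r\sin\varphi,\,r\cos\varphi)$; a sketch that checks these formulas with central finite differences:

```python
import numpy as np

def Phi(r, phi):
    """Polar coordinate map Phi(r, phi) = (x, y)."""
    return np.array([r * np.cos(phi), r * np.sin(phi)])

def coordinate_basis(r, phi, h=1e-6):
    """Coordinate basis v_r = dPhi/dr, v_phi = dPhi/dphi,
    approximated by central differences."""
    v_r = (Phi(r + h, phi) - Phi(r - h, phi)) / (2 * h)
    v_phi = (Phi(r, phi + h) - Phi(r, phi - h)) / (2 * h)
    return v_r, v_phi

r, phi = 2.0, 0.9
v_r, v_phi = coordinate_basis(r, phi)
# Note that the basis changes from point to point.
print(v_r, v_phi)
```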
In this sense, we can see that the components of the position vector at the point $p\in\mathbb{R}^n$ will be different from the coordinates of the point $p$ itself. The components depend on the coordinate basis (or any other basis which you define in terms of that one), while the coordinates of a point depend on the coordinate system itself.
In fact, now we're not talking about $\mathbb{R}^n$ as a vector space anymore, but this space does have a vector space attached at every point, a vector space we call the tangent space at $p$: $T_p\mathbb{R}^n$. The coordinate basis at every point is the vector basis that our coordinate system induces for the tangent space at that point.
Note that the components of a vector (or a tensor, for that matter) are sometimes called coordinates. I have only seen this usage in the context of pure vector spaces, without any notion of geometry, metrics or the like. For example, a matrix $A\in\mathcal{M}_{n\times n}$ which satisfies $\vec{v}_{\mathcal{B}'} = A\vec{v}_{\mathcal{B}}$ might be called a change of coordinates matrix (from the coordinates in $\mathcal{B}$ to the coordinates in $\mathcal{B}'$) or a change of basis matrix (from the basis $\mathcal{B}'$ to the basis $\mathcal{B}$, because it satisfies $\mathcal{B}'A=\mathcal{B}$, if we allow ourselves this abuse of notation).
Nonetheless, I always say components when referring to a vector (or tensor), and coordinates when referring to a point.
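The notational identity $\mathcal{B}'A=\mathcal{B}$ can be verified directly if we write each basis as the matrix whose columns are its vectors. A sketch with two hypothetical bases of $\mathbb{R}^2$:

```python
import numpy as np

# Two hypothetical bases of R^2, written as matrices of column vectors.
B  = np.array([[2.0, 3.0],
               [3.0, 5.0]])
Bp = np.array([[1.0, 1.0],
               [0.0, 1.0]])

# A maps coordinates w.r.t. B to coordinates w.r.t. B':
# v = B @ v_B = B' @ v_Bp  =>  v_Bp = inv(B') @ B @ v_B.
A = np.linalg.inv(Bp) @ B

# The "abuse of notation" identity B' A = B:
print(Bp @ A)

# Check on a concrete coordinate vector.
v_B = np.array([1.0, -2.0])
v = B @ v_B        # the vector itself, in standard coordinates
v_Bp = A @ v_B     # its coordinates w.r.t. B'
print(np.allclose(Bp @ v_Bp, v))
```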
Best Answer
HINT
You have to find $a,b,c,d\in \mathbb{R}$ such that:
$$a(1-x)+b(1+x)+c(x^2-x^3)+d(x^2+x^3)=1+x+x^2+x^3$$
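Comparing coefficients of $1,x,x^2,x^3$ turns the hint into a $4\times4$ linear system; a quick numerical sketch, with each polynomial encoded by its coefficient vector in the monomial basis:

```python
import numpy as np

# Columns: coefficient vectors of 1-x, 1+x, x^2-x^3, x^2+x^3
# in the monomial basis (1, x, x^2, x^3).
P = np.array([
    [ 1, 1,  0, 0],   # constant terms
    [-1, 1,  0, 0],   # coefficients of x
    [ 0, 0,  1, 1],   # coefficients of x^2
    [ 0, 0, -1, 1],   # coefficients of x^3
], dtype=float)

target = np.array([1.0, 1.0, 1.0, 1.0])  # 1 + x + x^2 + x^3
a, b, c, d = np.linalg.solve(P, target)
print(a, b, c, d)  # the solution is a=0, b=1, c=0, d=1
```

Indeed $1+x+x^2+x^3 = (1+x) + (x^2+x^3)$.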