You can't get there from here. The basis splines in your graph do not emerge as a straightforward algebraic manipulation of the equation you have supplied -- at least, not straightforward to me. But that's not where they come from.
The basis functions come from various theoretical results about B-splines. The spline function is the smoothest function that passes close to (or interpolates) the sampled function values (the knot points). It can be shown that the solution to this optimization problem lies in a finite-dimensional function space composed of piecewise polynomials -- the degree of which depends on how much smoothness you want. The kinks in the polynomials occur at the knot points.
So now we go in search of sensible basis functions. In your case, you have piecewise linear functions ... so the simplest piecewise linear basis element is a tent function ... and the tent pole has to occur at a knot point, because that's where we get the break in differentiability. The basis functions supplied by R are not the only choice, but they produce nice band matrices, which are cheap and easy to invert and otherwise manipulate.
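To make the tents concrete, here is a minimal sketch in Python/NumPy (the knot locations and values are made-up illustrations, not anything from your data):

```python
import numpy as np

def tent(x, knots, j):
    """j-th tent (hat) function: 1 at knots[j], 0 at the neighbouring knots,
    linear in between, and zero outside (knots[j-1], knots[j+1])."""
    l, c, r = knots[j - 1], knots[j], knots[j + 1]
    return np.clip(np.minimum((x - l) / (c - l), (r - x) / (r - c)), 0.0, None)

knots = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

# Cardinal property: each tent is 1 at its own knot and 0 at every other
# knot, so sum_j y_j * tent_j interpolates the value y_j at knot j.
y = np.array([2.0, -1.0, 0.5])                 # made-up values at knots 1, 2, 3
x = np.linspace(0.0, 4.0, 9)
s = sum(y[i] * tent(x, knots, j) for i, j in enumerate((1, 2, 3)))
# s hits 2.0, -1.0, 0.5 at the knots 1, 2, 3, and is piecewise linear between.
```

The cardinal ("1 at my knot, 0 at yours") property is what makes interpolation with this basis trivial: the coefficients are just the function values.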
Note that your basis functions must also respect the boundary conditions of the problem you have set yourself. The basis functions above will only give functions with $s(0)=0$. If I add the constant function to my basis, I can interpolate functions with $s(0)=c$.
Now convince yourself that the tent functions shown above are a basis for the requisite spline space. Consider what happens when you take linear combinations of the functions you illustrated above: they will all be piecewise linear, with kinks at the knot points. None of these basis functions can be obtained from the others. Finally, you need to show that you have the right number of them (I can't remember the formula, offhand, for the dimension of the spline space in terms of the number of knots and the degree of the polynomials).
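A quick numerical sanity check of the independence claim: evaluate the tents on a grid and confirm the resulting design matrix has full column rank (the grid and knots here are arbitrary illustrations):

```python
import numpy as np

def hat(x, l, c, r):
    """Hat function: 0 at l and r, 1 at c, linear in between, 0 outside."""
    return np.clip(np.minimum((x - l) / (c - l), (r - x) / (r - c)), 0.0, None)

knots = np.linspace(0.0, 1.0, 6)          # 4 interior knots
x = np.linspace(0.0, 1.0, 100)
T = np.column_stack([hat(x, knots[j - 1], knots[j], knots[j + 1])
                     for j in range(1, len(knots) - 1)])

# Full column rank == the tents are linearly independent.
print(np.linalg.matrix_rank(T))
```

This is not a proof, of course -- only a check that nothing degenerate is going on at the chosen knots.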
Smoother results would be obtained by increasing the degree of the polynomials -- you could have piecewise quadratics, or cubics (the usual choice) $\ldots$ and then your basis functions will look like a sequence of bells centered about the knot points.
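You can produce one of those cubic "bells" directly with SciPy's `BSpline.basis_element` (uniform knots assumed here purely for illustration):

```python
from scipy.interpolate import BSpline

# One cubic "bell": the B-spline basis element on five consecutive knots.
# basis_element infers the degree from the knot count (5 knots -> cubic).
bell = BSpline.basis_element([0, 1, 2, 3, 4])

# Peaks at the middle knot and decays symmetrically; for uniform knots the
# values at the knots 1, 2, 3 are 1/6, 2/3, 1/6.
print(bell(1.0), bell(2.0), bell(3.0))
```

Note the bell is nonzero only over four knot intervals -- the same local support that gives the banded matrices mentioned above.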
The truncated polynomials from your equation can also be used to build the spline smoother or interpolant, but they do not have the attractive numeric properties of the tent functions ... so that's why R does not supply them.
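You can see the numeric difference between the two bases by comparing condition numbers of their design matrices; a sketch (knots and grid are arbitrary choices for illustration):

```python
import numpy as np

# Same piecewise-linear spline setting, two bases: truncated powers
# {1, x, (x - t)_+} versus locally supported tent (hat) functions.
x = np.linspace(0.0, 1.0, 200)
all_knots = np.linspace(0.0, 1.0, 11)
interior = all_knots[1:-1]                      # 9 interior knots

trunc = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - t, 0.0) for t in interior])

def hat(x, l, c, r):
    return np.clip(np.minimum((x - l) / (c - l), (r - x) / (r - c)), 0.0, None)

tents = np.column_stack([hat(x, all_knots[j - 1], all_knots[j], all_knots[j + 1])
                         for j in range(1, 10)])

# Truncated powers overlap globally, so their design matrix is badly
# conditioned; the local tents give a well-conditioned, banded matrix.
print(np.linalg.cond(trunc), np.linalg.cond(tents))
```

The gap in conditioning grows with the number of knots, which is exactly why software prefers the B-spline form.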
I would not attempt to learn about spline functions from the references cited above. Ramsay and Hooker's Functional Data Analysis with R and Matlab ties in the theory with implementations in R. You could also dig up the original papers by Kimeldorf and Wahba on smoothing and interpolating splines.
Smoothing splines place knots at every data point, but then regularize (shrink the coefficients, smoothing the fit) by adding a roughness penalty term -- the integrated squared second derivative times a smoothing (tuning) parameter -- to the least squares criterion.
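A sketch of that criterion in practice (this uses `scipy.interpolate.make_smoothing_spline`, available in SciPy 1.10+; the data are simulated for illustration):

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

# Smoothing spline: a knot at every data point, with the fit shrunk toward
# smoothness by the penalty  lam * integral of s''(x)^2 dx.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x) + 0.2 * rng.standard_normal(50)

rough = make_smoothing_spline(x, y, lam=1e-6)   # tiny penalty: near-interpolation
smooth = make_smoothing_spline(x, y, lam=1.0)   # big penalty: much smoother fit
```

Turning `lam` up trades fidelity to the data for smoothness -- the larger penalty visibly suppresses the wiggles.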
In one way, it's sort of analogous to a kind of "weighted" ridge regression, if you're prepared to regard the way the basis functions come into the penalty as weights.
Discrete versions of smoothing splines (which replace the integrated squared derivatives with summed squared differences) have a long history, dating back at least a century.
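The discrete version (Whittaker-style graduation) is only a few lines of linear algebra; a sketch with simulated data:

```python
import numpy as np

# Discrete smoother: minimize ||y - z||^2 + lam * ||D2 z||^2, where D2 takes
# second differences. The solution is a single linear solve.
def whittaker(y, lam):
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)        # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rng = np.random.default_rng(1)
y = np.sin(np.linspace(0.0, 3.0, 100)) + 0.3 * rng.standard_normal(100)
z = whittaker(y, lam=100.0)                      # noticeably smoother than y
```

For long series you would use sparse/banded matrices rather than dense ones, but the idea is the same.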
They're different from regression splines, but the two are related in various ways.
In between those you have penalized splines which have fewer than the full complement of knots but still use the roughness penalty to regularize (smooth) the fit.
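A penalized-spline (P-spline, Eilers–Marx style) sketch in that spirit -- a modest cubic B-spline basis plus a difference penalty on the coefficients; the knot count, penalty order, and `lam` below are illustrative choices, not canonical ones:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(6.0 * x) + 0.2 * rng.standard_normal(200)

# Cubic B-spline basis on a modest set of uniform knots (far fewer than n),
# with full-multiplicity boundary knots.
k = 3
t = np.concatenate([[0.0] * k, np.linspace(0.0, 1.0, 15), [1.0] * k])
B = BSpline.design_matrix(x, t, k).toarray()     # 200 x 17 design matrix

# Second-order difference penalty on the coefficients, in place of the
# integrated squared second derivative.
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
lam = 1.0
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
```

`BSpline.design_matrix` needs SciPy 1.8+; with older versions you would assemble the basis columns by hand.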
I wouldn't normally regard splines as a way to transform variables but (among other things) to estimate functional relationships -- though if your interest is specifically in identifying some smooth transformation, they could be used for that.