Here you go:
alpha = 0.05;  % damping factor -- see the discussion below
% Evaluating f at the 2x2 matrix [x,1;0,x] returns [f(x),f'(x);0,f(x)],
% so [1,0]*f(...) is [f(x),f'(x)] and prod(....^[1,-1]) is f(x)/f'(x).
NM = @(x,f)x-alpha*prod(([1,0]*f([x,1;0,x])).^[1,-1]);
f = @(x)32*x^6-48*x^4+18*x^2-x^0;  % x^0 yields the identity matrix, not 1
x = 0.4;   % note: f'(0.5) = 0, so starting at exactly 0.5 breaks the method
err = 1;   % ("eps" shadows MATLAB's built-in machine epsilon, so use err)
while err > 1e-5
    xn = NM(x,f);
    err = abs(xn-x);
    x = xn;
end
[x f(x)]
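For the curious, the compact form of `NM` rests on a matrix identity: for the 2×2 Jordan block $M=\begin{pmatrix}x&1\\0&x\end{pmatrix}$, any polynomial $p$ satisfies $p(M)=\begin{pmatrix}p(x)&p'(x)\\0&p(x)\end{pmatrix}$, so a single matrix evaluation produces both the value and the derivative. A plain-Python sketch of that identity (hand-rolled 2×2 arithmetic; all names here are mine, not part of the original code):

```python
# Jordan-block trick behind the MATLAB one-liner: for M = [[x,1],[0,x]],
# a polynomial p satisfies p(M) = [[p(x), p'(x)], [0, p(x)]].

def matmul2(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow2(M, n):
    # M^n by repeated multiplication; n = 0 gives the identity,
    # which is why the MATLAB code writes the constant term as x^0
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = matmul2(R, M)
    return R

def p(M):
    # 32x^6 - 48x^4 + 18x^2 - 1, evaluated with matrix powers
    R = [[0.0, 0.0], [0.0, 0.0]]
    for c, n in [(32, 6), (-48, 4), (18, 2), (-1, 0)]:
        P = matpow2(M, n)
        R = [[R[i][j] + c * P[i][j] for j in range(2)] for i in range(2)]
    return R

x = 0.3
val, deriv = p([[x, 1.0], [0.0, x]])[0]  # top row = [p(x), p'(x)]
print(val, deriv)
```

One matrix evaluation thus replaces writing out the derivative by hand, which is what lets the MATLAB version fit in a single anonymous function.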
I caution you not to turn this code in to your professor. He will probably ask you how it works.
I provide this so that you may see how the method works. However, the underlying mechanics of this particular code are somewhat complicated and likely beyond your level, so you will have to write your own function. If you turn this in and the professor reviews your code, I guarantee you will be penalized for plagiarism.
In an earnest attempt at an answer, the above code includes one trick.
An iteration of Newton's method looks like this:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
However, if we plot our polynomial, we find that it oscillates somewhat like a sine wave centered at $x=0$. This causes troublesome behavior in Newton's method: each projection along the tangent line can throw the value to another part of the wave, producing a "back and forth" action.
In order to accommodate this, we add a scaling parameter, $\alpha$:
$$x_{n+1} = x_n - \alpha\frac{f(x_n)}{f'(x_n)}.$$
Setting this scaling parameter such that $0 < \alpha < 1$ attenuates the effect of the slope. This slows down the convergence, but it ensures that we don't project "too far" ahead and get caught in an infinite loop.
Examine the difference between `alpha = 0.05` and `alpha = 1`.
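For readers without MATLAB, here is a plain-Python sketch of the same damped iteration (function names and tolerances are my choices, not the original's); swap the `alpha` argument to compare behaviors yourself:

```python
# Damped Newton iteration x_{n+1} = x_n - alpha * f(x_n)/f'(x_n)
# for the polynomial from the answer above, derivative written out by hand.

def f(x):
    return 32*x**6 - 48*x**4 + 18*x**2 - 1

def fprime(x):
    return 192*x**5 - 192*x**3 + 36*x

def damped_newton(x, alpha, tol=1e-5, max_iter=100000):
    """Iterate until successive values differ by less than tol."""
    for _ in range(max_iter):
        xn = x - alpha * f(x) / fprime(x)
        if abs(xn - x) < tol:
            return xn
        x = xn
    return x

# Start at 0.4: exactly x = 0.5 is a critical point of this polynomial
# (f'(0.5) = 0), where the Newton step is undefined.
root = damped_newton(0.4, alpha=0.05)
print(root, f(root))
```

With the small `alpha` the iterates creep down monotonically to the nearby root; with `alpha = 1` the first full-length step can jump to a different part of the wave, which is exactly the behavior described above.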
Best Answer
The method is easiest to justify in one dimension. Say that I have some complicated function $f(x)$ whose root I want to find:
"I don't know how to find its root; it's complicated!" Thus, we use a general idea that has always been used in the design of numerical methods: replace the complicated problem with a simpler one that we already know how to solve.
One of the simplest functions one can deal with is a linear function:
$$f(x)=mx+b$$
In particular, if you want the root of a linear function, it's quite easily figured:
$$x=-\frac{b}{m}$$
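In code form this is a one-liner (a trivial check, with made-up values $m=2$, $b=6$):

```python
# Root of the line f(x) = m*x + b is x = -b/m; here m = 2, b = 6 (arbitrary).
m, b = 2.0, 6.0
x = -b / m
print(x)          # -3.0
print(m * x + b)  # 0.0 -- the line really does cross zero there
```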
Now, it is well-known (or at least, it ought to be) that the tangent line of a function is the "best" linear approximation to that function in the vicinity of its point of tangency:
The first idea of the Newton-Raphson method is that, since it is easy to find the root of a linear function, we pretend that our complicated function is a line, and then find the root of a line, with the hope that the line's crossing is an excellent approximation to the root we actually need.
Mathematically, the tangent line of $f(x)$ at the "starting point" $x=a$ gives the approximation
$$f(x)\approx f(a)+f^\prime(a)(x-a).$$
Setting the right-hand side to zero and solving for $x$:
$$x=a-\frac{f(a)}{f^\prime(a)}$$
Let's call this $x_1$.
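To make one such step concrete, here is a tiny numeric example of my own (using the stand-in function $f(x)=x^2-2$, whose root is $\sqrt{2}$, not the polynomial from the first answer):

```python
# One Newton-Raphson step x1 = a - f(a)/f'(a), with f(x) = x^2 - 2 and a = 1.
f = lambda x: x**2 - 2      # f(1) = -1
fprime = lambda x: 2*x      # f'(1) = 2
a = 1.0
x1 = a - f(a) / fprime(a)   # 1 - (-1)/2 = 1.5
print(x1)                   # 1.5, versus the true root sqrt(2) = 1.41421...
```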
As you can see, the blue point corresponding to the approximation is a bit far off, which brings us to the second idea of Newton-Raphson: if at first you don't succeed, try again:
As you can see, the new blue point is much nearer to the red point. Mathematically, this corresponds to finding the root of the new tangent line at $x=x_1$:
$$x_2=x_1-\frac{f(x_1)}{f^\prime(x_1)}$$
We can keep playing this game (with $x_2, x_3, \dots x_n$), up until the point that we find a value where the quantity $\dfrac{f(x_n)}{f^\prime(x_n)}$ is "tiny". We then say that we have converged to an approximation of the root. That is the essence of Newton-Raphson.
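The game above translates directly into a loop; a minimal sketch (names, tolerance, and iteration cap are my choices), again using $x^2-2$ so the answer is recognizable:

```python
# Repeat x_{n+1} = x_n - f(x_n)/f'(x_n) until the update f(x_n)/f'(x_n)
# is "tiny" -- the essence of Newton-Raphson.

def newton(f, fprime, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:    # converged: the correction is negligible
            return x
    raise RuntimeError("did not converge")

# Example: the root of x^2 - 2 (i.e. sqrt(2)), starting from x = 1
r = newton(lambda x: x*x - 2, lambda x: 2*x, 1.0)
print(r)   # approximately 1.41421356237...
```

Only a handful of iterations are needed here; once the iterate is close, each step roughly doubles the number of correct digits, which is the method's celebrated quadratic convergence.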
As an aside, the previous discussion should tip you off to what might happen if the tangent line is nearly horizontal: the step $\dfrac{f(x_n)}{f^\prime(x_n)}$ blows up, which is one of the disastrous things that can happen while applying the method.
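You can see this disaster numerically with the polynomial from the first answer, which has a horizontal tangent at $x=0.5$ (i.e. $f'(0.5)=0$); starting just next to that point hurls the iterate far away (my example):

```python
# Near-horizontal tangent: for f(x) = 32x^6 - 48x^4 + 18x^2 - 1,
# f'(0.5) = 0, so a start just beside 0.5 produces a huge Newton step.
f = lambda x: 32*x**6 - 48*x**4 + 18*x**2 - 1
fprime = lambda x: 192*x**5 - 192*x**3 + 36*x

a = 0.499                    # tangent here is almost flat
x1 = a - f(a) / fprime(a)    # the step f(a)/f'(a) is enormous
print(x1)                    # thrown dozens of units from the start
```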