Note that all limits need to be $\epsilon\rightarrow 0^+$ instead of $\epsilon\rightarrow\infty$. The other error ($\delta(0)=1$) has already been pointed out in Nimda's answer.
What you need to show is that the functions defined in (i) and (ii) in your question have an integral of unity, and that for $\epsilon\rightarrow 0^+$ they become zero for $t\neq 0$ and tend to infinity for $t=0$. E.g., for the first function you need to verify
$$\frac{\epsilon}{\pi}\int_{-\infty}^{\infty}\frac{1}{t^2+\epsilon^2}dt=1$$
and
$$\lim_{\epsilon\rightarrow 0^+}\frac{\epsilon}{\pi(t^2+\epsilon^2)}=\begin{cases}\infty,& t=0\\0,&t\neq 0 \end{cases}$$
I'm sure you can take it from here.
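For completeness, one possible route for the normalization is the standard arctangent antiderivative:
$$\frac{\epsilon}{\pi}\int_{-\infty}^{\infty}\frac{dt}{t^2+\epsilon^2}=\frac{\epsilon}{\pi}\left[\frac{1}{\epsilon}\arctan\frac{t}{\epsilon}\right]_{-\infty}^{\infty}=\frac{1}{\pi}\left(\frac{\pi}{2}+\frac{\pi}{2}\right)=1\quad\text{for every }\epsilon>0,$$
and the pointwise limit in the second display then follows by inspection.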
From the get-go we should be able to tell that equation (2) is incorrect, since it equals
$$\left(\int_{-\infty}^\infty p(x_1)dx_1\right)\cdots\left(\int_{-\infty}^\infty p(x_{n-2})dx_{n-2}\right)\int_{-\infty}^\infty p(x_{n-1})p(x-x_{n-1})dx_{n-1} \tag{2-bad}$$
which is $1\cdots 1\cdot P_2(x)=P_2(x)$ (each of the first $n-2$ factors is unity because $p$ is a probability density), rather than the $P_n(x)$ it is supposed to be.
The correct form can be intuited. Since the step sizes are independent, the joint probability density of having step sizes $(x_1,\cdots,x_n)$ should be $p(x_1)\cdots p(x_{n-1})p(x_n)$, which we should then integrate over the hyperplane defined by the equation $x_1+\cdots+x_n=x$ in order to obtain the probability density for the $n$ steps to sum to $x$. This domain of integration can be parametrized by letting the variables $x_1,\cdots,x_{n-1}$ roam freely and setting $x_n=x-(x_1+\cdots+x_{n-1})$, yielding
$$\int_{-\infty}^\infty\cdots\int_{-\infty}^\infty p(x_1)\cdots p(x_{n-1})p\big(x-(x_1+\cdots+x_{n-1})\big)\, dx_{n-1}\cdots dx_1. \tag{2-good}$$
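As a quick consistency check, setting $n=2$ in $(\text{2-good})$ leaves a single integral,
$$P_2(x)=\int_{-\infty}^\infty p(x_1)\,p(x-x_1)\,dx_1,$$
which is exactly the convolution produced by the recursion (1) with $P_1=p$.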
We can derive this from (1) using substitution if we wish. Observe
$$P_n(x)=\int_{-\infty}^\infty p(x-u_1)P_{n-1}(u_1)du_1=\int_{-\infty}^\infty\int_{-\infty}^\infty p(x-u_1)p(u_1-u_2)P_{n-2}(u_2)du_2du_1$$
$$\cdots =\int_{-\infty}^\infty\cdots\int_{-\infty}^\infty p(x-u_1)p(u_1-u_2)\cdots p(u_{n-2}-u_{n-1})P_1(u_{n-1})du_{n-1}\cdots du_1 $$
and of course $P_1=p$. Make a change of variables
$$\begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_{n-2} \\ x_{n-1}\end{bmatrix}=\begin{bmatrix}u_1-u_2 \\ u_2-u_3 \\ \vdots \\ u_{n-2}-u_{n-1} \\ u_{n-1}\end{bmatrix}=\begin{bmatrix} 1 & -1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & -1 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & -1 \\ 0 & 0 & 0 & \cdots & 0 & 1 \end{bmatrix} \begin{bmatrix}u_1 \\ u_2 \\ \vdots \\ u_{n-2} \\ u_{n-1}\end{bmatrix}.$$
The Jacobian determinant of this upper triangular matrix is $1$, and $u_1=x_1+\cdots+x_{n-1}$, so the integral becomes precisely the one written in $(\text{2-good})$.
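If one also wants a purely numerical sanity check of $(\text{2-good})$, the sketch below (just an illustration; it assumes, for convenience, a standard normal step density $p$, for which $P_n$ is known to be a normal density with variance $n$) iterates the convolution in (1) on a grid:

```python
import numpy as np

# Illustrative check (assumes a standard normal step density p, chosen for convenience):
# iterate P_{k+1}(x) = \int p(x - u) P_k(u) du on a grid and compare with the
# known n-step result, a normal density with variance n.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

n = 4
P = p.copy()
for _ in range(n - 1):
    # discretized convolution (Riemann sum); mode="same" keeps the result on the same grid
    P = np.convolve(P, p, mode="same") * dx

expected = np.exp(-x**2 / (2 * n)) / np.sqrt(2 * np.pi * n)
print(np.max(np.abs(P - expected)))  # should be tiny (discretization/truncation error only)
```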
Best Answer
Firstly, OP is essentially asking the following question: how should one make sense of $\int_{\mathbb{R}}\! dx~g(x)~\delta(f(x))$ when $f$ has zeros of higher order, i.e. zeros at which $f^{\prime}$ also vanishes?
That is a good question. Below we will take a heuristic approach, and leave it to the reader to insert adequate assumptions to justify the various steps in the calculation. The answer is then
$$I~:=~\int_{\mathbb{R}}\! dx~g(x)~\delta(f(x))~=~\sum_{\text{zeros }a}\lim_{y\to 0^+}\frac{g(a+\sqrt[n_a]{y})+g(a-\sqrt[n_a]{y})}{2n_a|A_a|~y^{1-\frac{1}{n_a}}},\tag{B}$$
where $n_a$ is the order of the zero $a$ and $A_a=\frac{f^{(n_a)}(a)}{n_a!}$.
Obviously, the limits in eq. (B) might not exist.
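As a simple illustration of this caveat, take $f(x)=x^2$ (a double zero at the origin, so $n_a=2$ and $A_a=1$) and $g\equiv 1$: the corresponding limit is $$\lim_{y\to 0^+}\frac{1+1}{4\sqrt{y}}~=~\infty,$$ so no finite value is assigned to $\int_{\mathbb{R}}\!dx~\delta(x^2)$.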
Sketched proof of formula (B). Since each zero is isolated, it is enough to just consider one zero $a$. By translational symmetry of the Lebesgue measure, we can assume that the zero is at the origin $a=0$. It follows that it is enough to consider $$f(x)~=~Ax^n, \tag{C}$$ where $n$ is the order of the zero and $$A~=~\frac{f^{(n)}(0)}{n!}~\neq~ 0.\tag{D}$$ We can throw the constant $A$ outside of the Dirac delta distribution as $\frac{1}{|A|}$. Then $$I~=~\frac{1}{|A|}\int_{\mathbb{R}}\! dx~g(x)\delta(x^n) ~=~\frac{1}{|A|}\int_{\mathbb{R}_+}\! dx~[g(x)+g(-x)]\delta(x^n)$$ $$~\stackrel{y=x^n}{=}~\frac{1}{|A|}\int_{\mathbb{R}_+}\! \frac{dy}{ny^{1-\frac{1}{n}}}~[g(\sqrt[n]{y})+g(-\sqrt[n]{y})]\delta(y) ~=~\lim_{y\to 0^+}\frac{g(\sqrt[n]{y})+g(-\sqrt[n]{y})}{2n|A|y^{1-\frac{1}{n}}}, \tag{E}$$ where in the last equality only half of the delta distribution's unit mass is (heuristically) picked up at the boundary point $y=0$ of the half-line, and the evaluation at $y=0$ is written as a limit. This reproduces the familiar result $$I~=~\frac{g(0)}{|f^{\prime}(0)|}\quad\text{for}\quad n~=~1.\tag{F}$$ $\Box$
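As a sanity check of the simple-zero case, apply (F) zero by zero to $f(x)=x^2-a^2$ with $a\neq 0$: both zeros $\pm a$ are of first order with $|f^{\prime}(\pm a)|=2|a|$, so one recovers the standard identity $$\int_{\mathbb{R}}\! dx~g(x)~\delta(x^2-a^2)~=~\frac{g(a)+g(-a)}{2|a|}.$$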
Secondly, OP asks what is wrong with his second calculation. For starters, it seems that OP is forgetting the Jacobian factor that arises under integration by substitution.
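Concretely, for the substitution $y=x^n$ on $x>0$ used above, one has $dy=nx^{n-1}dx$, i.e. $$dx~=~\frac{dy}{n\,y^{1-\frac{1}{n}}},$$ which is precisely the Jacobian factor appearing in eq. (E).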