Dimensional regularization (``dim-reg'') is a method to regulate divergent integrals. Instead of working in $4$ dimensions, where loop integrals diverge, you work in $4-\epsilon$ dimensions. This trick lets you pick out the divergent part of the integral, just as a cutoff does. However, it treats all divergences equally, so you can't distinguish a quadratic from a logarithmic divergence using dim-reg. All it really does is hide the fine-tuning; it doesn't fix the problem.
As an example, let's do the one-loop mass renormalization of $\phi^4$ theory. The diagram gives
\begin{equation}
\int \frac{ - i \lambda }{ 2} \frac{ i }{ \ell ^2 - m ^2 + i \epsilon } \frac{ d ^4 \ell }{ (2\pi)^4 } = \lim _{ \epsilon \rightarrow 0 }\frac{ - i \lambda }{ 2} \frac{ - i m ^2 }{ 16 \pi ^2 } \left( \frac{ 2 }{ \epsilon } + \log 4 \pi - \log m ^2 - \gamma \right)
\end{equation}
where I have used the ``master formula'' in the back of Peskin and Schroeder, Eq. (A.44) (note that this $ \epsilon $ has nothing to do with the $ \epsilon $ in the propagator; the factor of $m^2$ is required by dimensional analysis). This gives a mass renormalization of
\begin{equation}
\delta m ^2 = \lim _{ \epsilon \rightarrow 0 } \frac{ \lambda m ^2 }{ 32 \pi ^2 } \left( \frac{ 2 }{ \epsilon } + \log 4 \pi - \log m ^2 - \gamma \right)
\end{equation}
Keeping only the divergent part:
\begin{equation}
\delta m ^2 = \lim _{ \epsilon \rightarrow 0 } \frac{ \lambda m ^2 }{ 16 \pi ^2 } \frac{ 1 }{ \epsilon }
\end{equation}
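The $\epsilon$-expansion quoted above is easy to check numerically. A minimal sketch (setting $m^2 = 1$ so the $\log m^2$ term drops, and keeping the finite $+1$ that the master formula also produces but that gets discarded along with the other finite pieces):

```python
import math

eps = 1e-6  # small but finite regulator, d = 4 - eps

# Exact d-dimensional result (with m^2 = 1): Gamma(eps/2 - 1) * (4*pi)^(eps/2)
exact = math.gamma(eps / 2 - 1) * (4 * math.pi) ** (eps / 2)

# Its Laurent expansion around eps = 0: -(2/eps + log(4*pi) - gamma_E + 1) + O(eps)
euler_gamma = 0.5772156649015329
expansion = -(2 / eps + math.log(4 * math.pi) - euler_gamma + 1)

# Relative difference is tiny: the pole term dominates at ~2e6
print(abs((exact - expansion) / exact))
```

The divergence shows up as the $-2/\epsilon$ pole of $\Gamma(\epsilon/2-1)$, which is all that survives when you keep only the divergent part.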
This is the same result as the one you arrived at above, but with a different regulator: you regulated your integral with a cutoff, while I used dim-reg. The mass correction diverges as $ \sim \frac{1}{ \epsilon }$; this is where the sensitivity to the UV physics is stored.
A cutoff, which is a dimensionful number, tells you something very physical: the scale of new physics. The $\epsilon$, by contrast, is unphysical, just a useful parameter.
With a cutoff, depending on how bad your divergence is, you will get different scaling with the cutoff: logarithmic, quadratic, or quartic. That scaling has real physical significance, namely how sensitive the result is to the high-energy physics. Dim-reg-regulated integrals, however, always diverge the same way, like $ \frac{1}{ \epsilon } $. Dim-reg doesn't care how your integral diverges: even a logarithmically divergent integral yields a $ \frac{1}{ \epsilon }$ pole. The reason is that $ \epsilon $ is not a physical quantity here; it's just a useful trick to regulate the integrals.
Since dim-reg hides the type of divergence you have, people like to say that dim-reg solves the fine-tuning problem, because by using it you don't get to see how bad your divergence is. This viewpoint is clearly flawed: the quadratic divergences are still there, they just appear to be on the same footing as logarithmic divergences when you use dim-reg.
In short, the fine-tuning problem isn't really fixed by dim-reg, but if you use it you can pretend the problem isn't there. This is by no means a solution to the fine-tuning, unless someone develops an intuition for why dim-reg is the ``correct'' way to regulate your integrals, i.e., a physical meaning for $ \epsilon $ (and it's safe to say there isn't one).
Let us suppose that the Standard Model is an effective field theory, valid below a scale $\Lambda$, and that its bare parameters are set at the scale $\Lambda$ by a fundamental, UV-complete theory, perhaps string theory.
The logarithmic corrections to the bare fermion masses, if $\Lambda\sim M_P$, are a few percent of their masses. The quadratic correction to the bare Higgs mass squared, however, is $\sim M_P^2$. A disaster! Phenomenologically we know that the dressed mass squared ought to be $\sim -(100 \,\text{GeV})^2$.
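To put a rough number on the disaster (an order-of-magnitude sketch; the Planck mass value $\sim 1.22\times 10^{19}$ GeV is the usual one):

```python
# Order-of-magnitude estimate of the required tuning (all values in GeV)
m_dressed_sq = 100.0**2       # target dressed mass^2, ~ (100 GeV)^2
M_planck = 1.22e19            # Planck mass in GeV (assumed value)
correction_sq = M_planck**2   # size of the quadratic correction, ~ M_P^2

# The bare mass^2 must cancel the correction to one part in ~10^34
tuning = m_dressed_sq / correction_sq
print(f"required relative tuning ~ {tuning:.1e}")  # ~ 7e-35
```

The bare parameter and the correction must agree to roughly 34 decimal places for the dressed mass to come out at the electroweak scale.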
You are right that the SM is in any case renormalisable: our calculations remain finite even in the limit $\Lambda\to\infty$. But we have many reasons to believe that we should pick $\Lambda\sim M_P$.
Also, if there are new massive particles, their contributions cannot be absorbed into the bare mass; they will affect the RG running of the renormalised mass.
PS apologies if I've repeated things you know and have written in the question.
Best Answer
Whether you do your calculations using a cutoff regularization or dimensional regularization or another regularization is just a technical detail that has nothing to do with the existence of the hierarchy problem. Order by order, you will get the same results whatever your chosen regularization or scheme is.
The schemes and algorithms may differ in the precise moment at which you subtract some unphysical infinite terms, etc. Indeed, dimensional regularization removes power-law divergences from the start. But the hierarchy problem may be expressed in a way that is manifestly independent of these technicalities.
The hierarchy problem is the problem that one has to fine-tune actual physical parameters of a theory expressed at a high energy scale with a huge accuracy – with error margins smaller than $(E_{low}/E_{high})^k$ where $k$ is a positive power – in order for this high-energy theory to produce the low-energy scale and light objects at all.
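Schematically, for a scalar mass the statement is (a sketch, with $c$ an order-one loop coefficient):
\begin{equation}
m_{low}^2 = m_{high}^2(\Lambda) - c\,\Lambda^2 , \qquad \text{so } m_{high}^2 \text{ must be specified with relative accuracy } \sim \frac{m_{low}^2}{\Lambda^2} \sim \left( \frac{E_{low}}{E_{high}} \right)^2 .
\end{equation}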
If I formulate the problem in this way, it's clear that it doesn't matter what scheme you are using to do the calculations. In particular, your miraculous "cure" based on the dimensional regularization may hide the explicit $\Lambda^2$ in intermediate results. But it doesn't change anything about the dependence on the high-energy parameters.
What you would really need for a "cure" of the physical problem is to pretend that no high-energy scale physics exists at all. But it does. It's clear that the Standard Model breaks before we reach the Planck energy and probably way before that. There have to be more detailed physical laws that operate at the GUT scale or the Planck scale and those new laws have new parameters.
The low-energy parameters such as the LHC-measured Higgs mass 125 GeV are complicated functions of the more fundamental high-energy parameters governing the GUT-scale or Planck-scale theory. And if you figure out what condition is needed for the high-scale parameters to make the Higgs $10^{15}$ times lighter than the reduced Planck scale, you will see that they're unnaturally fine-tuned conditions requiring some dimensionful parameters to be in some precise ranges.
More generally, it's very important to distinguish true physical insights and true physical problems from some artifacts depending on a formalism. One common misconception is the belief of some people that if the space is discretized, converted to a lattice, a spin network, or whatever, one cures the problem of non-renormalizability of theories such as gravity.
But this is a deep misunderstanding. The actual physical problem hiding under the "nonrenormalizability" label isn't the appearance of the symbol $\infty$ which is just a symbol that one should interpret rationally. We know that this $\infty$ as such isn't a problem because at the end, it gets subtracted in one way or another; it is unphysical.

The main physical problem is the need to specify infinitely many coupling constants – coefficients of the arbitrarily-high-order terms in the Lagrangian – to uniquely specify the theory. The cutoff approach makes it clear because there are many kinds of divergences that differ and each of these divergent expressions has to be "renamed" as a finite constant, producing a finite unspecified parameter along the way.

But even if you avoid infinities and divergent terms from scratch, the unspecified parameters – the finite remainders of the infinite subtractions – are still there. A theory with infinitely many terms in the Lagrangian has infinitely many pieces of data that must be measured before one may predict anything: it remains unpredictive at any point.
In a similar way, the fine-tuning required for the high-energy parameters is a problem because, using Bayesian inference, one may argue that it was "highly unlikely" for the parameters to conspire in such a way that the high-energy physical laws produce e.g. the light Higgs boson. The degree of fine-tuning (parameterized by a small number) is therefore translated into a small probability (given by the same small number) that the original theory (a class of theories with some parameters) agrees with the observations.
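A toy illustration of that translation (a sketch with made-up numbers, not any real model): draw the high-energy parameter from a uniform prior and count how often the cancellation leaves a remainder small enough to mimic a "light Higgs".

```python
import random

random.seed(0)

# Toy model: m_low^2 = m_bare^2 - c, with the correction c fixed to 1
# and the bare parameter drawn from a uniform prior on (0, 2).
c = 1.0
N = 1_000_000
tolerance = 1e-4   # demand a 1-in-10^4 cancellation (a "light" remainder)

hits = sum(1 for _ in range(N)
           if abs(random.uniform(0.0, 2.0) - c) < tolerance)
prob = hits / N
# Under the uniform prior, prob comes out ~ 1e-4: the degree of
# fine-tuning *is* the probability of agreeing with observation.
print(prob)
```

Any "solution" to the problem amounts to replacing that uniform prior with a better-motivated measure under which the small remainder is no longer unlikely.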
When this fine-tuning is of order $0.1$ or even $0.01$, it's probably OK. Physicists have different tastes regarding what degree of fine-tuning they're ready to tolerate. For example, many phenomenologists have thought that even a $0.1$-style fine-tuning is a problem – the little hierarchy problem – that justifies the production of hundreds of complicated papers. Many others disagree that the term "little hierarchy problem" deserves to be viewed as a real one at all. But pretty much everyone who understands the actual "flow of information" in quantum field theory calculations as well as basic Bayesian inference seems to agree that fine-tuning and the hierarchy problem are a problem when they become too severe. The problem isn't necessarily an "inconsistency" but it does mean that there should exist an improved explanation why the Higgs is so unnaturally light. The role of this explanation is to modify the naive Bayesian measure – with a uniform probability distribution for the parameters – that made the observed Higgs mass look very unlikely. Using a better conceptual framework, the prior probabilities are modified so that the small parameters observed at low energies are no longer unnatural, i.e., unlikely.
Symmetries such as supersymmetry and new physics near the electroweak scale are two major representatives of solutions to the hierarchy problem. They eliminate the huge "power-law" dependence on the parameters describing the high-energy theory. One still has to explain why the parameters at the high energy scale are such that the Higgs is much lighter than the GUT scale, but the amount of fine-tuning needed to explain such a thing may be just "logarithmic", i.e. "one in $15\ln 10$", where 15 is the base-ten logarithm of the ratio of the mass scales. And this is of course a huge improvement over fine-tuning at a precision of "1 in 1 quadrillion".
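That closing comparison is simple arithmetic, spelled out in a quick numerical sketch:

```python
import math

# "Logarithmic" tuning for a 10^15 mass-scale ratio: one part in 15*ln(10)
log_tuning = 1.0 / (15 * math.log(10))   # ~ 0.03, i.e. a few percent
# Power-law tuning without a protective symmetry:
# one part in 10^15 ("1 in 1 quadrillion")
power_tuning = 1e-15

# The improvement is about 13 orders of magnitude
print(log_tuning / power_tuning)
```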