A standard way to solve this is to consider simultaneously the mean number of rolls $t_n$ needed to reach $n$ for every nonnegative integer $n$ (and to remember at the end that the OP is asking for $t_{30}$). So, let us do that...
For every $n\leqslant0$, $t_n=0$. For every $n\geqslant1$, considering the result of the first roll, one sees that
$$t_n=1+\frac16\sum_{k=1}^6t_{n-k}
$$
Thus, the series $$T(s)=\sum_nt_ns^n$$ solves
$$T(s)=\frac s{1-s}+\frac16\sum_{k=1}^6s^kT(s)$$ from which it follows that
$$T(s)=\frac{6s}{(1-s)\left(6-\sum\limits_{k=1}^6s^k\right)}=\frac{6s}{6-7s+s^7}$$
To extract the coefficient $t_{30}=[s^{30}]T(s)$ of $s^{30}$ in $T(s)$, rewrite this as
$$T(s)=s\left(1-rs\left(1-\frac t7\right)\right)^{-1}=\sum_{n=0}^\infty r^ns^{n+1}\left(1-\frac t7\right)^n$$
where $$r=\frac76\qquad t=s^6$$
hence
$$[s^{30}]T(s)=\sum_{k=0}^4r^{5+6k}[t^{4-k}]\left(1-\frac t7\right)^{5+6k}$$
or, equivalently,
$$[s^{30}]T(s)=\sum_{k=0}^46^{-5-6k}[t^{4-k}]\left(7-t\right)^{5+6k}=\frac7{6^5}\sum_{k=0}^4(-1)^k7^kr^{6k}{5+6k\choose4-k}$$
that is,
$$t_{30}=\frac7{6^5}\left(5-1155r^6+6664r^{12}-7889r^{18}+2401r^{24}\right)
$$
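The coefficient extraction can be double-checked numerically: evaluate the finite sum $\sum_{k=0}^4r^{5+6k}[t^{4-k}]\left(1-\frac t7\right)^{5+6k}$ in exact arithmetic, using $[t^m](1-t/7)^n=\binom nm(-1/7)^m$, and compare with the defining recursion. A short Python sketch:

```python
from fractions import Fraction
from math import comb

r = Fraction(7, 6)

# [s^30] T(s) via the finite sum over k (exact arithmetic):
# sum_k r^(5+6k) * [t^(4-k)] (1 - t/7)^(5+6k), with [t^m](1 - t/7)^n = C(n, m) (-1/7)^m
closed = sum(
    r ** (5 + 6 * k) * comb(5 + 6 * k, 4 - k) * Fraction(-1, 7) ** (4 - k)
    for k in range(5)
)

# Independent check: t_n = 1 + (1/6) sum_{k=1}^6 t_{n-k}, with t_n = 0 for n <= 0
t = {n: Fraction(0) for n in range(-5, 1)}
for n in range(1, 31):
    t[n] = 1 + Fraction(1, 6) * sum(t[n - k] for k in range(1, 7))

print(closed == t[30])
print(float(closed))  # ≈ 9.0476
```

Both computations agree, which confirms the coefficient extraction.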
which can be "simplified" into the exact result $$t_{30}=
\frac{333366007330230566034343}{36845653286788892983296}$$ with a numerical approximation $$t_{30}\approx
9.047634594384022902065997942672796588425278684184104625$$
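The exact fraction and its decimal value are easy to reproduce directly from the recursion at the top, using Python's `fractions` module for exact rational arithmetic (a sketch):

```python
from fractions import Fraction

def expected_rolls(target):
    """Mean number of fair-die rolls needed for the total to reach `target`."""
    # t_n = 0 for n <= 0; t_n = 1 + (1/6) sum_{k=1}^6 t_{n-k} for n >= 1
    t = {n: Fraction(0) for n in range(-5, 1)}
    for n in range(1, target + 1):
        t[n] = 1 + Fraction(1, 6) * sum(t[n - k] for k in range(1, 7))
    return t[target]

t30 = expected_rolls(30)
print(t30)         # the exact rational value
print(float(t30))  # ≈ 9.04763459438...
```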
Edit: To get some estimates of $t_n$ when $n\to\infty$, note that $$T(s)=\frac s{(1-s)^2Q(s)}$$ where $$Q(s)=\frac16(6+5s+4s^2+3s^3+2s^4+s^5)$$

Furthermore, $(1-s)Q(s)=1-\frac16\sum\limits_{k=1}^6s^k$ has no zero in the closed unit disk except the simple zero $s=1$, hence $Q(s)$ has no zero in the closed unit disk. This implies that $$T(s)=\frac a{(1-s)^2}+\frac b{1-s}+R(s)$$ for some constants $(a,b)$ and some rational fraction $R(s)=\sum\limits_nr_ns^n$ with no pole in the closed unit disk. Then there exist a finite $c$ and some $\varrho$ in $(0,1)$ such that, for every $n$, $$|r_n|\leqslant c\varrho^n$$ This yields, again for every $n$, $$|t_n-a(n+1)-b|=|r_n|\leqslant c\varrho^n$$ Equivalently, $$t_n=an+(a+b)+O(\varrho^n)$$

To identify $(a,b)$, note that $$(1-s)^2T(s)=a+b(1-s)+(1-s)^2R(s)$$ hence $$a=\left.(1-s)^2T(s)\right|_{s=1}=\frac1{Q(1)}$$ and $$b=-\left.\frac d{ds}[(1-s)^2T(s)]\right|_{s=1}=-\frac1{Q(1)}+\frac{Q'(1)}{Q(1)^2}$$ or, equivalently, $$a+b=\frac{Q'(1)}{Q(1)^2}$$
Finally, $Q(1)=\frac72$ and $Q'(1)=\frac{35}6$ hence $a=\frac27$ and $a+b=\frac{10}{21}$, which implies
$$t_n=\frac27n+\frac{10}{21}+O(\varrho^n)$$
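The error term is already tiny for moderate $n$; a quick numerical check of $t_n$ against $\frac27n+\frac{10}{21}$, computing $t_n$ exactly by the recursion (a Python sketch):

```python
from fractions import Fraction

# Exact t_n from the recursion, compared with the asymptotic 2n/7 + 10/21.
t = {n: Fraction(0) for n in range(-5, 1)}
for n in range(1, 61):
    t[n] = 1 + Fraction(1, 6) * sum(t[n - k] for k in range(1, 7))

for n in (10, 20, 30, 60):
    asymptotic = 2 * n / 7 + 10 / 21
    print(n, float(t[n]), asymptotic, abs(float(t[n]) - asymptotic))
```

The printed errors shrink geometrically with $n$, as the $O(\varrho^n)$ term predicts.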
Edit-edit: More generally, when repeatedly throwing a "die" producing a random number of points distributed like $X$, following the same route one gets
$a=1/E(X)$ and $a+b=Q'(1)/E(X)^2$ with $Q'(1)=\frac12E(X(X-1))$, hence, for some $\varrho_X$ in $(0,1)$ depending on the distribution of $X$, $$t_n=\frac1{E(X)}n+\frac{E(X(X-1))}{2E(X)^2}+O(\varrho_X^n)$$
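To illustrate the general formula, take a hypothetical two-sided "die" showing $1$ or $2$ with probability $\frac12$ each (this example is mine, not from the answer): then $E(X)=\frac32$ and $\frac{E(X(X-1))}{2E(X)^2}=\frac29$, so the expansion predicts $t_n\approx\frac23n+\frac29$. The exact recursion confirms it:

```python
from fractions import Fraction

# A "die" showing 1 or 2 with probability 1/2 each:
# E(X) = 3/2 and E(X(X-1)) / (2 E(X)^2) = 1 / (2 * 9/4) = 2/9,
# so the general formula predicts t_n ≈ (2/3) n + 2/9.
t = {0: Fraction(0), -1: Fraction(0)}
for n in range(1, 21):
    t[n] = 1 + Fraction(1, 2) * (t[n - 1] + t[n - 2])

for n in (5, 10, 20):
    print(n, float(t[n]), 2 * n / 3 + 2 / 9)
```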
The problem you describe is well suited to analysis as a Markov chain.
To define a particular Markov chain, you want a state space, which is a set of discrete possible states numbered $1, 2, 3, \ldots, M$; a sequence of discrete steps forward in time; and, for each pair of states, the probability that, being in state $i$ at some point in time, you will be in state $j$ at the next time step.
For every $i$ and $j$ in $1, 2, 3, \ldots, M$ there is such a probability
(which may be zero), and this probability is the same at every time step.
The increasing chance of success that you have defined can be modeled by the transitions from one state to another.
Let state $1$ be the starting state, state $2$ be a state you reach after your first failure, state $3$ be a state you reach after your second failure, and so forth.
If you have decided that after $f$ failures the probability of success stops increasing and you just continue trials at that same probability until you succeed, then you only need to define states up to state number $M = f + 1.$
In each state $i$ you have some probability of success (which means a transition to state $1$) and some probability of failure (which means a transition to state $i + 1$, except in state $M$ where you stay in state $M$).
You arrange the transition probabilities in a transition matrix so that the cell in row $i$ and column $j$ is the probability of transition to state $j$ if you are in state $i.$
For Scenario $3$, for example, the transition matrix is
$$ Q = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0.15 & 0 & 0.85 & 0 & 0 & 0 \\
0.3 & 0 & 0 & 0.7 & 0 & 0 \\
0.45 & 0 & 0 & 0 & 0.55 & 0 \\
0.6 & 0 & 0 & 0 & 0 & 0.4 \\
0.75 & 0 & 0 & 0 & 0 & 0.25
\end{pmatrix}. $$
If you want to know the probability of landing in state $j$ after $n$ timesteps, starting in state $i$, you compute the $n$th power of the transition matrix,
$Q^n.$
In general a Markov chain lets you specify a probability distribution for your initial state, but in your case your starting state is state $1$ with $100\%$ probability, so the initial state vector is
$s_0=\begin{pmatrix}1 & 0 & 0 & 0 & 0 & 0\end{pmatrix}.$
The probability distribution after $n$ timesteps then is the vector
$s_0 Q^n.$
The stationary distribution of the Markov chain is a probability distribution $s$ among the states that stays the same after any number of timesteps, that is,
$s = sQ = sQ^2 = sQ^3,$ etc.
At least in the examples you are considering, if you start in state $1$ and continue taking time steps indefinitely, the probability distribution among the states will converge to the stationary distribution.
There is an algorithm to find the stationary distribution: not a simple one-line formula, but a deterministic procedure, solving the linear system $s = sQ$ together with the normalization $\sum_i s_i = 1$, that takes a bounded number of steps and requires no simulation.
A calculator I found online
(actually a downloadable Excel file)
computed the steady-state vector as
$$\begin{pmatrix}0.25337 & 0.25337 & 0.21537 & 0.15076 & 0.08292 & 0.04422\end{pmatrix},$$
which says that if you continue long enough, the percentage of times you will be in state $1$ among all time steps will converge to $25.337\%.$
Since being in state $1$ means you had a success in the previous time step
(except in the initial state), I think it is reasonable to say that in the long run this process will have success $25.337\%$ of the time.
This matches your simulation results.
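The steady-state vector above can also be reproduced without a spreadsheet. Here is a sketch in plain Python using power iteration, i.e. simply applying $Q$ to the initial distribution many times until it stops changing (one of several deterministic ways to get the stationary distribution):

```python
# Transition matrix Q for Scenario 3 (row i -> column j probabilities).
Q = [
    [0.0,  1.0, 0.0,  0.0, 0.0,  0.0 ],
    [0.15, 0.0, 0.85, 0.0, 0.0,  0.0 ],
    [0.3,  0.0, 0.0,  0.7, 0.0,  0.0 ],
    [0.45, 0.0, 0.0,  0.0, 0.55, 0.0 ],
    [0.6,  0.0, 0.0,  0.0, 0.0,  0.4 ],
    [0.75, 0.0, 0.0,  0.0, 0.0,  0.25],
]

s = [1.0, 0, 0, 0, 0, 0]  # start in state 1 with probability 1
for _ in range(500):      # s Q^500: far more steps than needed to converge
    s = [sum(s[i] * Q[i][j] for i in range(6)) for j in range(6)]

print([round(x, 5) for x in s])
```

Since the chain is irreducible and aperiodic, the iterates $s_0Q^n$ converge to the unique stationary distribution, matching the Excel result to five decimal places.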
Best Answer
Success rate is, first of all, a rate (of success). So naturally it's $$ \frac{\#\{\text{successful events}\}}{\#\{\text{successful events}\}+\#\{\text{unsuccessful events}\}} ~~(= 0.04 = 4\%)$$
where $\#$ denotes the number (count) of events.