I think multiple different "correction" strategies are possible, since the requirements as stated are fairly general.
Setup
Assume $t\in\mathbb{Z}_{\geq 1}$ denotes the time step, starting at $t=1$. The "base" probability of success at time step $t$ is $p_t$ without any correction, and the corrected version is $\hat p_t$. Let $X_t \in \{0, 1\}$ denote the outcome at time $t$, where $1$ is a success.
Probability of success
If you do not make any adjustment, the probability of succeeding at time step $t$ is $p_t$. You can write the probability of the outcome (whichever outcome you get) at time step $t$ as $X_tp_t + (1-X_t)(1-p_t)$, which you can check gives the correct answer irrespective of what $X_t$ is.
At time step $t$, the probability of getting exactly the sequence $X_1, X_2, \ldots , X_t$ (e.g. $1, 0, 0, \ldots, 1$) is, assuming independent trials, $\prod\limits_{i=1}^{t}\left(X_ip_i+(1-X_i)(1-p_i)\right)$. But since this is specific to one particular order of outcomes, the probability decreases monotonically as $t$ increases (the number of possible sequences grows), as long as each $0 < p_i < 1$. At this stage, it is unclear how to phrase a probability question that captures "how lucky the person was". One option is to ask for the probability of being at least as "lucky" as the person was, i.e. the probability of getting at least as many successes as the person got. With a time-dependent $p_t$ that has no simple closed form, this requires iterating over many permutations (depending on how you define "lucky"), and it might not be the best-motivated question in your case anyway.
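To make the product above concrete, here is a minimal sketch that evaluates the likelihood of one specific outcome sequence; the probabilities and outcomes used below are illustrative values, not ones from the question.

```python
# Likelihood of observing one specific outcome sequence X_1..X_t,
# assuming independent trials with per-step success probabilities p_i.
def sequence_likelihood(p, X):
    L = 1.0
    for p_i, x_i in zip(p, X):
        # each factor is p_i on a success (X_i = 1), 1 - p_i on a failure
        L *= x_i * p_i + (1 - x_i) * (1 - p_i)
    return L

p = [0.2, 0.3, 0.4, 0.5]   # hypothetical base probabilities
X = [1, 0, 0, 1]           # observed outcomes (1 = success)
print(sequence_likelihood(p, X))  # 0.2 * 0.7 * 0.6 * 0.5 = 0.042
```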
Considerations
Even in the above strategy, once you do come up with a "correction", it is difficult to decide whether you should use the uncorrected $p_t$ or the corrected version $\hat p_t$ to calculate future corrections. The former seems to be slightly better motivated since your aim is to find out how "lucky" the player has been in comparison to the base probabilities.
Easier Strategy
Since you only need a correction that roughly translates to the inverse of how lucky the person was, here is one way to do it. Take the sequence $E[X_1], E[X_2], \ldots , E[X_t]$ and "subtract" the observed sequence from it to obtain the sequence of deviations $p_1 - X_1, p_2 - X_2, \ldots, p_t - X_t$ (since $E[X_i]=p_i$, the former can be replaced with the latter). This represents how much the player deviated from the "expected" mean; a negative deviation means the player was overly lucky. Sum all the entries: the sign of the sum tells you whether the player was overly unsuccessful or overly successful. You can then create a correction by adding a small portion of this sum to the next probability $p_{t+1}$. Ideally, you should also divide each term by its standard deviation to get a "normalized" deviation from the mean. Here is one possible formula for the correction factor $c$ with some arbitrary small constant $a$.
$$c_{t+1} = \sum\limits_{i=1}^{t}\frac{p_i - X_i}{\sqrt{p_i(1-p_i)}}$$
$$\hat p_{t+1} = p_{t+1} + ac_{t+1}$$
(taking measures to keep $\hat p_{t+1}$ bounded between $0$ and $1$). The selection of $a$ is essentially arbitrary, but here is one suggestion: simply set $a=\sqrt{p_{t+1}(1-p_{t+1})}$, which is the standard deviation of the $(t+1)$th step. You can additionally multiply by a factor $0<x \leq 1$ depending on how strict you want the correction to be.
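A minimal sketch of this correction, assuming the suggested choice $a = x\sqrt{p_{t+1}(1-p_{t+1})}$ and simple clamping to keep the result a valid probability (the function name and the value $x=0.5$ are my own choices for illustration):

```python
import math

def corrected_probability(p_hist, X_hist, p_next, x=0.5):
    # c_{t+1}: sum of standardized deviations (p_i - X_i) / sd_i
    c = sum((p - X) / math.sqrt(p * (1 - p))
            for p, X in zip(p_hist, X_hist))
    # a = x * (standard deviation of step t+1); x in (0, 1] sets strictness
    a = x * math.sqrt(p_next * (1 - p_next))
    # clamp so the corrected value stays a valid probability
    return min(1.0, max(0.0, p_next + a * c))

# A lucky streak (two successes at p = 0.5) gives c = -2,
# so the next probability is adjusted downward:
print(corrected_probability([0.5, 0.5], [1, 1], 0.5, x=0.1))  # 0.4
```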
Does this fulfill your requirements?
You can check that this works for your given example. Note that the order of the successes does matter in this strategy. I am not sure what you mean by individual vs aggregate in your second point.
Whether the specific statistical properties of such a formula suit you, or whether another suggestion would suit you better, depends on the details of the situation. From the broad phrasing of the question, I am assuming that an intuitively appealing correction will work without the need for specific statistical properties. You can tune $a$ or $x$ to your needs, or modify the formula, e.g. by taking the squared deviation from the mean divided by the variance, or by dropping the denominator entirely. All of these change the statistical properties and may be better, worse, or equivalent for your case.
The problem you describe seems well suited to analysis as a Markov chain.
To define a particular Markov chain, you want to have a state space, which is a set of discrete possible states numbered $1, 2, 3, \ldots, M$; you want to identify a sequence of discrete steps forward in time; and you want to say, if you are in state number $i$ at some point in time, what the probability is that you will be in state number $j$ at the next time step.
For every $i$ and $j$ in $1, 2, 3, \ldots, M$ there is such a probability
(which may be zero), and this probability stays the same at every time.
The way this can model the increasing chances of success that you have defined is through the transitions from one state to another.
Let state $1$ be the starting state, state $2$ be a state you reach after your first failure, state $3$ be a state you reach after your second failure, and so forth.
If you have decided that after $f$ failures the probability of success stops increasing and you just continue trials at that same probability until you succeed, then you only need to define states up to state number $M = f + 1.$
In each state $i$ you have some probability of success (which means a transition to state $1$) and some probability of failure (which means a transition to state $i + 1$, except in state $M$ where you stay in state $M$).
You arrange the transition probabilities in a transition matrix so that the cell in row $i$ and column $j$ is the probability of transition to state $j$ if you are in state $i.$
For Scenario $3$, for example, the transition matrix is
$$ Q = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0.15 & 0 & 0.85 & 0 & 0 & 0 \\
0.3 & 0 & 0 & 0.7 & 0 & 0 \\
0.45 & 0 & 0 & 0 & 0.55 & 0 \\
0.6 & 0 & 0 & 0 & 0 & 0.4 \\
0.75 & 0 & 0 & 0 & 0 & 0.25
\end{pmatrix}. $$
If you want to know the probability of landing in state $j$ after $n$ timesteps, starting in state $i$, you compute the $n$th power of the transition matrix,
$Q^n.$
In general a Markov chain lets you specify a probability distribution for your initial state, but in your case your starting state is state $1$ with $100\%$ probability, so the initial state vector is
$s_0=\begin{pmatrix}1 & 0 & 0 & 0 & 0 & 0\end{pmatrix}.$
The probability distribution after $n$ timesteps then is the vector
$s_0 Q^n.$
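As a sketch, computing $Q^n$ and $s_0 Q^n$ for the Scenario $3$ matrix above is a few lines with NumPy:

```python
import numpy as np

# Transition matrix Q for Scenario 3, as given above:
# row i, column j holds P(next state = j | current state = i).
Q = np.array([
    [0.0,  1.0, 0.0,  0.0,  0.0,  0.0 ],
    [0.15, 0.0, 0.85, 0.0,  0.0,  0.0 ],
    [0.3,  0.0, 0.0,  0.7,  0.0,  0.0 ],
    [0.45, 0.0, 0.0,  0.0,  0.55, 0.0 ],
    [0.6,  0.0, 0.0,  0.0,  0.0,  0.4 ],
    [0.75, 0.0, 0.0,  0.0,  0.0,  0.25],
])

s0 = np.array([1.0, 0, 0, 0, 0, 0])  # start in state 1 with certainty

n = 200
dist_n = s0 @ np.linalg.matrix_power(Q, n)  # distribution after n steps
print(dist_n)  # already very close to the stationary distribution
```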
The stationary distribution of the Markov chain is a probability distribution $s$ among the states that stays the same after any number of timesteps, that is,
$s = sQ = sQ^2 = sQ^3,$ etc.
At least in the examples you are considering, if you start in state $1$ and continue taking time steps indefinitely, the probability distribution among the states will converge to the stationary distribution.
There is an algorithm to find the stationary distribution, not a simple one-line formula but still a deterministic algorithm taking a limited number of steps, not requiring simulation.
A calculator I found online (actually a downloadable Excel file) calculated the steady-state vector as
$$\begin{pmatrix}0.25337 & 0.25337 & 0.21537 & 0.15076 & 0.08292 & 0.04422\end{pmatrix},$$
which says that if you continue long enough, the percentage of times you will be in state $1$ among all time steps will converge to $25.337\%.$
Since being in state $1$ means you had a success in the previous time step
(except in the initial state), I think it is reasonable to say that in the long run this process will have success $25.337\%$ of the time.
This matches your simulation results.
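The steady-state vector can also be computed directly, without a spreadsheet: one standard approach is to solve the linear system $s = sQ$ together with the normalization $\sum_i s_i = 1$. A sketch:

```python
import numpy as np

# Transition matrix Q for Scenario 3, as given above.
Q = np.array([
    [0.0,  1.0, 0.0,  0.0,  0.0,  0.0 ],
    [0.15, 0.0, 0.85, 0.0,  0.0,  0.0 ],
    [0.3,  0.0, 0.0,  0.7,  0.0,  0.0 ],
    [0.45, 0.0, 0.0,  0.0,  0.55, 0.0 ],
    [0.6,  0.0, 0.0,  0.0,  0.0,  0.4 ],
    [0.75, 0.0, 0.0,  0.0,  0.0,  0.25],
])

# s = sQ is equivalent to (Q^T - I) s^T = 0; that system is rank-deficient,
# so replace one equation with the normalization constraint sum(s) = 1.
M = Q.T - np.eye(6)
M[-1, :] = 1.0
b = np.zeros(6)
b[-1] = 1.0
s = np.linalg.solve(M, b)
print(np.round(s, 5))
# approximately [0.25337 0.25337 0.21537 0.15076 0.08292 0.04422]
```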
Best Answer
If $X$ is the random number of attempts needed to observe the first success, then $$\Pr[X \le x] = 1 - (1-p)^x,$$ where $p = \frac{1}{4096}$. This represents the probability of being successful within $x$ attempts. So for instance, if we want to compute the probability of being successful within $x = 4096$ attempts, this is $$\Pr[X \le 4096] = 1 - \left(1 - \frac{1}{4096}\right)^{4096} \approx 0.632165.$$ You have interpreted this to be the probability of remaining unsuccessful; instead, it means that on average, about $63.2\%$ of players would have at least one successful attempt in $4096$ tries.
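Numerically, with $p = 1/4096$ (a small sketch; the function name is my own):

```python
# Probability of at least one success within x attempts, where the number
# of attempts to the first success is geometric with p = 1/4096.
p = 1 / 4096

def prob_success_within(x):
    return 1 - (1 - p) ** x

print(prob_success_within(4096))  # ~0.632165
```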
The median number of attempts needed is given by $m$, where $$\Pr[X \le m] = 1 - (1 - p)^m = \frac{1}{2},$$ hence we require $$m = \frac{\log \frac{1}{2}}{\log (1-p)} \approx 2838.784,$$ but since $m$ must be a positive integer, we round up to get $m = 2839$ attempts needed. But all this says is that the chance of being successful within this many tries is 50-50.
The required number of attempts to have a $100(1-\alpha)\%$ chance of being successful is called a quantile (in this case, a percentile): it is $$\Pr[X \le q_{1-\alpha}] = 1 - (1 - p)^{q_{1-\alpha}} = 1 - \alpha$$ or $$q_{1-\alpha} = \left\lceil \frac{\log \alpha}{\log (1-p)} \right\rceil,$$ so for a $95\%$ chance, it is $$q_{0.95} = \left\lceil \frac{\log 0.05}{\log \frac{4095}{4096}} \right\rceil = 12270,$$ and for a $99\%$ chance, it is $$q_{0.99} = 18861.$$
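The median and the other quantiles above come from the same one-line formula; a sketch that reproduces the figures quoted:

```python
import math

p = 1 / 4096

def quantile(conf):
    # smallest integer q with P(X <= q) >= conf, i.e.
    # q = ceil(log(alpha) / log(1 - p)) with alpha = 1 - conf
    alpha = 1 - conf
    return math.ceil(math.log(alpha) / math.log(1 - p))

print(quantile(0.5))   # 2839  (the median)
print(quantile(0.95))  # 12270
print(quantile(0.99))  # 18861
```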
That said, because the outcomes of subsequent attempts are independent of previous attempts, having already made $a$ unsuccessful attempts does not increase your chances of being successful in the future. In other words, if you tried $2000$ times and failed, you will still need to try another $2839$ times to have a 50-50 chance. This is what we call the memoryless property.