First, the problem as stated needs additional assumptions to have a clear-cut solution. Make sure you understand why, as it's quite key to understanding how to properly model a real-life problem with probability theory.
My personal take would be to see this as a latent variable / Hidden Markov Model (HMM) situation. General intro to HMMs here: http://en.wikipedia.org/wiki/Hidden_Markov_model.
You could introduce a variable $Y_n$ corresponding to the $n$-th shot's outcome, and a hidden variable (or latent state) $X_n$ corresponding to the player's skill at time $n$. Assuming the player's success at time $n$ depends only on $X_n$ and not on $Y_{< n}$ (e.g. our player does not stress out after a miss...), and that $X_n$ depends only on $X_{n-1}$, our situation is exactly described by the HMM below (illustration source here):
Now you still need to make a handful of modeling assumptions, namely:
- a value range for $X_n$ (for instance $\{u^1, u^2, u^3\}$ = $\{$bad, average, good$\}$)
- a transition probability function: $P(X_n \mid X_{n-1})$ (for instance $P(u^i \mid u^i) = \alpha = 2/3$ and $P(u^{i+1} \mid u^i) = 1-\alpha$, if we assume the player can only get better with time)
- an output probability function: $P(Y_n \mid X_n)$ (i.e., in the notation above, $P(\text{success} \mid u^i)$ for all $i$).
Once you've done all this, your problem is one of inference, namely calculating $P(Y_n \mid y_1, \ldots, y_{n-1})$ (with the common convention that $Y$ denotes the random variable and $y$ a realisation of $Y$). The methods for solving this are of course beyond the scope of this site...
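For concreteness, here is a minimal sketch of that inference step via the standard forward (filtering) recursion, under the assumptions listed above. The emission probabilities `b` and the initial distribution `pi` are made-up illustrative values, and I read "can only get better" as making the top state absorbing; none of this is prescribed by the original problem.

```python
# Forward-recursion sketch for P(Y_n | y_1, ..., y_{n-1}) in the HMM above.
# States: 0 = bad, 1 = average, 2 = good; alpha = 2/3 as in the text.
alpha = 2.0 / 3.0

# A[i][j] = P(X_n = j | X_{n-1} = i): stay with prob alpha, improve one
# level with prob 1 - alpha; "good" is assumed absorbing.
A = [[alpha, 1 - alpha, 0.0],
     [0.0,   alpha,     1 - alpha],
     [0.0,   0.0,       1.0]]

b = [0.2, 0.5, 0.8]    # assumed P(success | state), purely illustrative
pi = [1.0, 0.0, 0.0]   # assume the player starts out "bad"

def predict_next(outcomes):
    """Return P(Y_n = success | y_1, ..., y_{n-1}) for the observed 0/1 list."""
    belief = pi[:]                                   # P(X_1)
    for y in outcomes:
        # condition on the observed outcome (1 = success, 0 = miss)
        lik = [b[s] if y else 1 - b[s] for s in range(3)]
        post = [belief[s] * lik[s] for s in range(3)]
        z = sum(post)
        post = [p / z for p in post]
        # propagate one step through the transition matrix
        belief = [sum(post[i] * A[i][j] for i in range(3)) for j in range(3)]
    return sum(belief[s] * b[s] for s in range(3))
```

For example, `predict_next([0, 0, 1, 1])` gives the predicted success probability for the fifth shot after two misses and two successes; a streak of successes pushes the belief toward the "good" state and raises the prediction.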
PS: That's just a very simple HMM approach described in layman's terms. The crudest assumption made here is guessing/fixing the model's parameters once and for all. The Bayesian way would be to leave all parameters open (i.e. the $a_{m,k} = P(X_n = u^m \mid X_{n-1} = u^k)$ and $b_k = P(Y_n = \text{success} \mid X_n = u^k)$) and then to calculate $P(Y_n \mid y_1, \ldots, y_{n-1})$ by averaging over the model's parameter space (given $y_1, \ldots, y_{n-1}$). This paper http://mlg.eng.cam.ac.uk/zoubin/papers/ijprai.pdf mentions this technique; look for the words "predictive distribution".
PPS: Another approach would be to treat the situation as a mixture model (MM) problem. Wikipedia link: http://en.wikipedia.org/wiki/Mixture_model. It's far less appropriate than an HMM, but you may find it easier to understand.
In a nutshell, the typical MM example is a bag of $K$ indistinguishable coins of two types, the types present in unknown proportions $a_1$ and $a_2$ (i.e. $a_1 K$ and $a_2 K$ coins) and each type having an unknown flip bias, say $b_1$ and $b_2$. You pick one coin at random and flip it $m$ times; then you put the coin back in the bag and start again, $n$ times in total. The goal is to determine the best estimates of $\theta = (a_1, a_2 = 1-a_1, b_1, b_2)$ from the outcome of the experiment. Then, with the parameters estimated, it's easy to evaluate the distribution of your next marginal draw. The standard technique for estimating $\theta$ is called expectation maximization (EM) and is described briefly in the Wikipedia link above. Intuitively, if you see in your data that in 20% (resp. 80%) of the draws the proportion of heads is between 10% and 30% with an average of 22% (resp. between 55% and 70% with an average of 63%), you're tempted to conclude that $a_1 = 20\%$, $a_2 = 80\%$, $b_1 = 22\%$ and $b_2 = 63\%$. Also, if a draw has a majority of heads, the coin in that draw was most likely of type 2. EM formalizes that, and does a bit more.
Now, you could say that in your problem the player is, for each game $n$, either Type 1 (good) or Type 2 (bad) (MM works with more than two categories, but let's keep it at two), with the probability $p$ of him being good unknown to you, and that in each state his probability of success is $b_1$ or $b_2$. Run EM (or any other method), and it will give you best estimates for $p$ and the $b_i$. It will also give you, for each game $n$, the probability that the player was good in that particular game, say $q_n$. Here comes the fiddling: plot $n \mapsto q_n$; hopefully it is roughly increasing, and you can come up with an extrapolation/approximation that yields a guess (let's not call it an estimate) for $q_{n+1}$. A best guess for the probability of success in the future game $n+1$ is then $P(\text{success}_{n+1}) = q_{n+1} \hat b_1 + (1-q_{n+1}) \hat b_2$. Granted, it's really dirty, as some assumptions here are self-contradictory, but it's a start, and my old physics teacher would have been satisfied with it.
Given your latest remark, we should count the number of times a person has to pay to play the game before success; that is why I said the probability will be much lower when the player returns to the start after each failure. We proceed step by step, starting with $s_0$ and moving up until we ultimately reach $s_4$, the success state.
The equations to be solved for this game are given below.
The first equation says that with one payment there is probability $1/2$ of moving to $s_1$ (stage $1$) and probability $1/2$ of returning to the start and having to pay again; and so on, until you eventually jump from $s_3$ to success.
$\displaylines{s_0 = 1 + \tfrac12 s_1 + \tfrac12 s_0 \\ s_1 = 1 + \tfrac12 s_2 + \tfrac12 s_0 \\ s_2 = 1 + \tfrac12 s_3 + \tfrac12 s_0 \\ s_3 = 1 + \tfrac12 s_0}$
This yields $s_0 = 30$, which means the player will have to pay $30$ times on average before "success", implying a probability of $\frac{1}{30}$ rather than $\frac{1}{16}$.
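You can check this numerically in a few lines of Python: treat the four equations as a fixed-point update and iterate until the values settle (the variable names are mine).

```python
# Fixed-point check of the system: s[0]..s[3] are the expected numbers of
# payments from each stage; repeatedly apply the four equations.
s = [0.0, 0.0, 0.0, 0.0]
for _ in range(10000):
    s = [1 + 0.5 * s[1] + 0.5 * s[0],   # s_0 = 1 + (1/2) s_1 + (1/2) s_0
         1 + 0.5 * s[2] + 0.5 * s[0],   # s_1 = 1 + (1/2) s_2 + (1/2) s_0
         1 + 0.5 * s[3] + 0.5 * s[0],   # s_2 = 1 + (1/2) s_3 + (1/2) s_0
         1 + 0.5 * s[0]]                # s_3 = 1 + (1/2) s_0
print(s[0])   # converges to 30.0 (up to floating-point rounding)
```

The iteration converges because every stage sends probability back to the start, so the chain is absorbing; the limit also gives $s_3 = 16$, $s_2 = 24$, $s_1 = 28$.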
Simplified answer
Instead of many equations, we can get one general equation.
Denote by $E(n)$ the expected number of tosses needed to get a run of $n$ successive heads.
Then, starting from $E(n-1)$, one more toss either completes the run or, with probability $1/2$, forces us to start all over again, so
$E(n) = E(n-1) + 1 + \tfrac12 E(n)$
This simplifies to $E(n) = 2\left[E(n-1) + 1\right]$, so
$E(1) = 2$,
$E(2) = 2 \cdot 3 = 6$,
$E(3) = 2 \cdot 7 = 14$,
$E(4) = 2 \cdot 15 = 30$,
and $\Pr = \frac{1}{30}$.
Note that you can now easily extend the computation to longer runs, as you have hinted will be needed afterwards.
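That extension is a few lines of Python (the function name is mine); unwinding the recurrence also gives the closed form $E(n) = 2^{n+1} - 2$.

```python
# Iterate the recurrence E(n) = 2 * (E(n-1) + 1) with E(0) = 0.
def expected_payments(n):
    e = 0
    for _ in range(n):
        e = 2 * (e + 1)
    return e

print([expected_payments(k) for k in range(1, 5)])   # [2, 6, 14, 30]
```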
I think you have the answer by your own method. As you said, the probability that the game ends after $n$ rounds should be $$\frac{9}{10} \cdot \frac{19}{20} \cdot \frac{29}{30} \cdots \frac{10(n-1)-1}{10(n-1)} \cdot \frac{1}{10n}.$$
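For what it's worth, that product is easy to evaluate exactly with Python's `fractions` module (the function name is mine): survive rounds $1$ through $n-1$, with factor $(10k-1)/(10k)$ at round $k$, then end at round $n$ with factor $1/(10n)$.

```python
from fractions import Fraction

# Exact evaluation of the product formula above.
def p_end_at(n):
    p = Fraction(1)
    for k in range(1, n):
        p *= Fraction(10 * k - 1, 10 * k)   # game survives round k
    return p * Fraction(1, 10 * n)          # game ends at round n

print(p_end_at(1), p_end_at(2))   # 1/10 9/200
```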