You can't get there from here. You need to start with a different model. I would keep the weekly snapshots and build a stochastic model around transitions in each student's state variable. Suppose there are 10 weeks, which gives 11 "decision" points, $t_0, t_1, \ldots, t_{10}$. The state at $t_i$ is $(Z_i, S_i)$, where $Z_i$ is 1 or 0 according to whether the student is enrolled or not, and $S_i$ is the score at that point (the sum of test and homework scores to date). The initial state is $(1, 0)$. You have two transitions to worry about: $\Pr(Z_i = 0 \mid S_{i-1})$ and the distribution of $S_i$.
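To make the state process concrete, here is a minimal simulation sketch of one student's $(Z_i, S_i)$ trajectory. The per-item success probability and the weekly dropout probability are invented placeholder values, not estimates:

```python
import random

# Sketch: simulate one student's (Z_i, S_i) trajectory over 10 weeks.
# item_prob and drop_prob are illustrative placeholders, not estimates.
def simulate_student(weeks=10, items_per_week=10, item_prob=0.75,
                     drop_prob=0.03, seed=0):
    rng = random.Random(seed)
    z, s = 1, 0                        # initial state (Z_0, S_0) = (1, 0)
    history = [(z, s)]
    for _ in range(weeks):
        if rng.random() < drop_prob:   # dropout transition: Z_i -> 0
            z = 0
            history.append((z, s))
            break
        # weekly score increment: count of correct items this week
        s += sum(rng.random() < item_prob for _ in range(items_per_week))
        history.append((z, s))
    return history

path = simulate_student()
print(path[-1])  # final state; the student passes if Z = 1 and S >= 70
```

In a real model, `drop_prob` would vary by week and depend on $S_{i-1}$, as discussed below.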

The dropout probabilities are not stationary, since you will get a binge of dropouts just before the final drop-without-penalty date. But you can estimate these from past data. You can also estimate the probability of dropping out as a function of current (dismal) performance.
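Estimating week-specific dropout probabilities from past data is a simple hazard tabulation: for each week, count dropouts among students still enrolled at that week. A sketch with invented toy data (each spell is the last week a student was enrolled plus a dropout flag):

```python
from collections import Counter

# Toy historical spells: (last_week_enrolled, dropped_flag). Invented data.
spells = [(10, False), (3, True), (10, False), (7, True), (3, True), (10, False)]

at_risk = Counter()   # students still enrolled at the start of each week
dropped = Counter()   # dropouts occurring in each week
for last_week, did_drop in spells:
    for w in range(1, last_week + 1):
        at_risk[w] += 1
    if did_drop:
        dropped[last_week] += 1

# Week-by-week dropout hazard; a spike would show up just before the deadline.
hazard = {w: dropped[w] / at_risk[w] for w in sorted(at_risk)}
print(hazard)
```

The same tabulation stratified by current score would give the dropout-given-performance estimates.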

The $S$ scores form a random walk whose increments are binomial outcomes (the number of correct answers on a test of $n$ items, say). You can probably assume conditional independence: posit a latent "talent" parameter for each student and, conditional on that value, let each new score be independent of current performance. You could test this assumption against your historical data ... do failing students change their study habits and pull off a win? But most students behave true to form ... so a conditionally independent model should work OK.
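The latent-talent idea can be sketched as a beta-binomial draw: talent $p$ is drawn once per student, and given $p$ the weekly scores are independent binomials. The Beta(7, 3) prior here is an invented choice for illustration:

```python
import random

# Sketch of conditional independence: a latent "talent" p is drawn once per
# student (Beta(7, 3) is an invented prior); given p, each week's score is an
# independent Binomial(items, p) draw.
def weekly_scores(weeks=10, items=10, seed=1):
    rng = random.Random(seed)
    p = rng.betavariate(7, 3)          # latent talent, fixed for this student
    scores = [sum(rng.random() < p for _ in range(items)) for _ in range(weeks)]
    return p, scores

p, scores = weekly_scores()
print(round(p, 2), scores, sum(scores))
```

Testing the assumption against historical data amounts to checking whether, after conditioning on an estimated $p$, the residual week-to-week scores still show serial dependence.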

So basically, a student fails if the $Z$ variable transitions to 0, or the final $S$ score fails to cross the 70\% pass threshold.

Let's look more closely at the $S$ process. To simplify the model, assume that evaluation involves obtaining 70 points or more from a total of 100 possible points, obtained from 10 test items each week.

At baseline, a student's pass probability is simply the pass rate of the previous class.

At time 1, the student has earned $S_1$ points (or dropped out). He passes if he can earn at least $70 - S_1$ points out of the remaining 90. This is a binomial problem, which I can easily calculate if I know the student's probability of success. That probability will no longer be the "class average"; I need to adjust it in light of the student's success thus far. I would use a table from past experience for this, but you could do a weighted average of the overall class success rate and the student's personal success rate. Bayes' Rule should help here.
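A sketch of that calculation: update the per-item success probability with a beta-binomial rule (the Beta(14, 6) prior is an invented choice whose mean, 0.70, stands in for the class rate), then take a binomial tail probability over the remaining 90 points:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed directly from the pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Beta-binomial update: prior Beta(a, b) with mean a/(a+b) = 0.70 (invented),
# folded together with the student's first-week result of s1 correct out of 10.
a, b = 14.0, 6.0
s1 = 8                                  # points earned in week 1 (hypothetical)
p_post = (a + s1) / (a + b + 10)        # posterior mean success probability

# Pass requires at least 70 - s1 of the remaining 90 points.
print(binom_sf(70 - s1, 90, p_post))
```

The prior strength (here $a + b = 20$, i.e. worth two weeks of data) controls the weighting between the class rate and the student's own record.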

As a bonus, you can calculate a range of probabilities, which should narrow as the term progresses. In fact, strong students will cross the 70\% mark before the end of term, and their success will be certain at that point. For weak students, failure will also become certain before the end.
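The absorbing endpoints fall out of the same binomial tail calculation, as in this sketch (the per-item probability 0.9 is illustrative):

```python
from math import comb

def pass_prob(points_so_far, weeks_elapsed, p,
              total_weeks=10, items=10, threshold=70):
    """P(final score >= threshold) given current points and per-item prob p."""
    remaining = (total_weeks - weeks_elapsed) * items
    need = threshold - points_so_far
    if need <= 0:
        return 1.0                      # already over the line: pass is certain
    if need > remaining:
        return 0.0                      # threshold out of reach: failure certain
    return sum(comb(remaining, i) * p**i * (1 - p)**(remaining - i)
               for i in range(need, remaining + 1))

# A strong student who crosses 70 points by week 8 is certain to pass:
print(pass_prob(72, 8, 0.9))   # 1.0
# A weak student needing 45 points with only 40 available is certain to fail:
print(pass_prob(25, 6, 0.9))   # 0.0
```

Re-running this each week traces out the narrowing probability range the term produces.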

RE: question 3: should you go to continuous time? I wouldn't, because that puts you in the realm of continuous-time stochastic processes, and the math involved is above my pay grade. Besides, you are unlikely to get a substantially different outcome.

The best way to upgrade the model I have outlined is not to go to continuous time, but to adjust the transition probabilities on the basis of prior experience. Perhaps weak students fall further behind than an independence model would predict. Incorporating that inhomogeneity would improve the model more than going from discrete to continuous time.

## Best Answer

It simply *is* a probability; you can call it "predicted," as suggested by others. I see from the discussion that you disagree with that name, so let me prove to you that this *is* a probability.

First, recall that if $X$ is a Bernoulli distributed random variable parametrized by $p$, then $E(X) = p$. Second, take an intercept-only logistic regression model; such a model will calculate the *mean* of your predicted $Y$ variable. This would be the same as if you calculated it simply by taking $\hat y = (1/N) \sum_{i=1}^N y_i$. This mean converges to the expected value as $N \rightarrow \infty$, i.e. to $E(Y) = p$. In fact, the sample mean is the maximum likelihood estimator of $p$ for a Bernoulli distributed random variable. In the case of a more complicated logistic regression model, you predict conditional means, i.e. conditional probabilities. Check also: Why isn't Logistic Regression called Logistic Classification?

If this still does not convince you, below you can see a simple R example showing exactly that case:
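An example of that kind fits an intercept-only model and compares the fitted probability with the sample mean (in R, `glm(y ~ 1, family = binomial)` followed by `fitted(model)` versus `mean(y)` shows the same thing). Here is a self-contained sketch of that check, with invented 0/1 data, fitting the intercept by Newton-Raphson:

```python
from math import exp

# Sketch: intercept-only logistic regression fitted by Newton-Raphson.
# The fitted probability should equal the sample mean of y. Invented data.
y = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
n = len(y)

beta = 0.0                              # intercept, the only parameter
for _ in range(25):                     # Newton-Raphson on the log-likelihood
    p = 1 / (1 + exp(-beta))
    grad = sum(y) - n * p               # d loglik / d beta
    hess = -n * p * (1 - p)             # d^2 loglik / d beta^2
    beta -= grad / hess

p_hat = 1 / (1 + exp(-beta))
print(p_hat, sum(y) / n)                # both 0.7: the prediction IS a probability
```

The fitted value and the sample mean coincide, which is exactly the claim above: the model's output is the (conditional) mean of a Bernoulli variable, i.e. a probability.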