Let $V$ denote the expected value.
Assume that you adopt the initial strategy that you will reroll if and only if you roll $A$ or lower.
Then
$$V = \frac{100 - A}{100} \times \left[ ~\frac{100 + (A + 1)}{2} ~\right] + \frac{A}{100} \left[ ~V - 1 ~\right]. \tag1 $$
Equation (1) needs some explanation.
The probability is $\dfrac{100-A}{100}$ that you roll higher than $A$. When that happens, your average roll is $\dfrac{100 + (A + 1)}{2},$ the midpoint of the equally likely values $A+1$ through $100$.
The complementary event, rolling $A$ or lower, has probability $\dfrac{A}{100}$. When that happens, your expectation becomes $V - 1$, since you must pay $\$1$ to re-roll.
Now, suppose that you have rolled $n$ times without exceeding $A$. What is your expectation on the $(n+1)$-st roll?
The die has no memory, and the $\$(n-1)$ that you have paid so far is a sunk cost. So equation (1) still applies on the $(n+1)$-st roll.
Thus, equation (1) pertains no matter how many re-rolls have been attempted, and it is therefore the sole basis for determining the optimal value of $A$. Any strategy you adopt has the form: re-roll if and only if you roll $A$ or lower.
Inherent in (1) above is the constraint that infinitely many re-rolls are available at $\$1$ each. This is built into the term
$$ \frac{A}{100} \left[ ~V - 1 ~\right]. $$
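As a sanity check (my addition, not part of the original argument), equation (1) can be solved in closed form for $V$ and compared against a Monte Carlo simulation of the strategy; the function names below are my own:

```python
import random

def v_closed_form(A, sides=100):
    # Solving (1) for V gives V = (sides + A + 1)/2 + 1 - sides/(sides - A).
    return (sides + A + 1) / 2 + 1 - sides / (sides - A)

def v_simulated(A, sides=100, trials=100_000, seed=0):
    # Play the strategy "re-roll iff the roll is A or lower" many times.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        cost = 0
        while True:
            roll = rng.randint(1, sides)
            if roll > A:              # keep this roll
                total += roll - cost
                break
            cost += 1                 # pay $1 and re-roll
    return total / trials
```

For $A = 86$ the closed form gives $V \approx 87.36$, and the simulated average agrees to within sampling error.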
Addendum
Since the linked answer does not use the following approach to compute the optimal value of $A$, I will include it:
$$V = \frac{100 - A}{100} \times \left[ ~\frac{100 + (A + 1)}{2} ~\right] + \frac{A}{100} \left[ ~V - 1 ~\right] \implies $$
$$V ~\left[ ~1 - \frac{A}{100} ~\right] = \frac{100 - A}{100} \times \left[ ~\frac{100 + (A + 1)}{2} ~\right] + \frac{- A}{100}\implies $$
$$V ~\left[ ~\frac{100 - A}{100} ~\right] = \frac{100 - A}{100} \times \left[ ~\frac{100 + (A + 1)}{2} ~\right] + \frac{100 - A}{100} - 1 \implies $$
$$V = \left[ ~\frac{100 + (A + 1)}{2} ~\right] + 1 - \frac{100}{100 - A} \implies $$
$$V = \frac{103}{2} + \frac{A}{2} - \frac{100}{100 - A}. \tag2 $$
Differentiating with respect to $A$:
$$\frac{dV}{dA} = \frac{1}{2} - \frac{100}{\left( ~ 100 - A ~\right)^2}.$$
So
$$\frac{dV}{dA} = 0 \iff (100 - A)^2 = 200 \iff 100 - A = 10\sqrt{2} \approx 14.14.$$
Since $100 - 10\sqrt{2} \approx 85.86$, this indicates that the optimal integer value of $A$ is either $85$ or $86$.
Using (2) above:
$$V(85) = \frac{103 + 85}{2} - \frac{100}{15} = 94 - \frac{20}{3} \approx 87.33,$$
$$V(86) = \frac{103 + 86}{2} - \frac{100}{14} = \frac{189}{2} - \frac{50}{7} \approx 87.36.$$
So, $A = 86$ is superior to $A = 85.$
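The comparison can be checked numerically by evaluating formula (2) over every candidate threshold (a quick sketch I'm adding; `V` below is just formula (2)):

```python
def V(A):
    # Formula (2): V = 103/2 + A/2 - 100/(100 - A), valid for A < 100.
    return 103 / 2 + A / 2 - 100 / (100 - A)

best = max(range(100), key=V)   # thresholds A = 0, 1, ..., 99
print(best, round(V(85), 3), round(V(86), 3))
# prints: 86 87.333 87.357
```

The maximum over integer thresholds indeed lands on $A = 86$.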
Best Answer
Assuming for simplicity that you are not allowed to play the game infinitely (for example, you are forced to quit if you roll $n$), the game is "either take what you rolled, or pay $\$1$ and play again". Let's say that we get the maximum expected score using strategy $f$: "take the roll if it is at least $k$; otherwise switch to strategy $g$".
$\mathbb E f = \frac{n - k + 1}{n} \cdot \frac{k + n}{2} + \frac{k - 1}{n} \left(\mathbb E g - 1\right)$
Since $f$ gives the maximum score, we do at least as well by switching back to $f$ itself instead of $g$. This gives the equation $x = \frac{n - k + 1}{n} \cdot \frac{k + n}{2} + \frac{k - 1}{n}\left(x - 1\right)$, so our score is $x = \frac{k^2 + k - n^2 - n - 2}{2(k - n - 1)}$. Maximizing it gives $k = n - \sqrt{2n} + 1$ (which of course must be rounded up or down, depending on $n$).
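To tie this back to the $n = 100$ case (my own check, with hypothetical helper names): the optimal keep-threshold $k$ corresponds to re-rolling on $A = k - 1$ or lower, so we expect $k = 87$ for $A = 86$.

```python
import math

def score(k, n):
    # Expected score x = (k^2 + k - n^2 - n - 2) / (2(k - n - 1)), for k <= n.
    return (k * k + k - n * n - n - 2) / (2 * (k - n - 1))

def best_k(n):
    # Unrounded optimum k = n - sqrt(2n) + 1; compare the two nearest integers.
    k_star = n - math.sqrt(2 * n) + 1
    lo, hi = math.floor(k_star), math.ceil(k_star)
    return max(lo, hi, key=lambda k: score(k, n))

print(best_k(100), round(score(best_k(100), 100), 3))
# prints: 87 87.357
```

For $n = 100$ this recovers the same optimum ($k = 87$, i.e. $A = 86$) and the same expected score $\approx 87.36$ as the first derivation.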