Randomness is not a property of a deck; it is a property of a probability distribution over decks. xkcd said it best.
This is not the algorithm you want. What you actually want to check is whether your algorithm, after $n$ iterations, produces a distribution over decks that is approximately uniform. Fortunately, someone has already done the work for you: Persi Diaconis showed that it takes about $7$ riffle shuffles.
If you don't want to rely on that result, run your algorithm many times and track which decks show up as a function of how many shuffles you perform per trial. (You probably shouldn't track entire decks, but rather the first $k$ cards for some reasonable $k$, since the space of whole decks is far too large to sample meaningfully.)
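As a concrete sketch of that experiment, here is one way to do it in Python, assuming a Gilbert-Shannon-Reeds-style riffle model (the question doesn't fix a shuffle algorithm, and the helper names `riffle` and `prefix_tv_distance` are just for illustration):

```python
import math
import random
from collections import Counter

def riffle(deck, rng):
    """One riffle shuffle under the Gilbert-Shannon-Reeds model:
    cut at a Binomial(n, 1/2) point, then drop cards from each packet
    with probability proportional to the packet's remaining size."""
    cut = sum(rng.random() < 0.5 for _ in deck)
    left, right, out = deck[:cut], deck[cut:], []
    i = j = 0
    while i < len(left) or j < len(right):
        rem_l, rem_r = len(left) - i, len(right) - j
        if rng.random() * (rem_l + rem_r) < rem_l:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

def prefix_tv_distance(num_shuffles, k=2, trials=20000, seed=0):
    """Total variation distance between the empirical distribution of the
    first k cards (after num_shuffles riffles) and the uniform one."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        deck = list(range(52))
        for _ in range(num_shuffles):
            deck = riffle(deck, rng)
        counts[tuple(deck[:k])] += 1
    n_prefixes = math.perm(52, k)  # 52 * 51 ordered prefixes for k = 2
    uniform = 1 / n_prefixes
    observed = sum(abs(c / trials - uniform) for c in counts.values())
    unseen = (n_prefixes - len(counts)) * uniform
    return (observed + unseen) / 2
```

With a fixed seed, the distance should shrink noticeably as `num_shuffles` grows toward $7$, though sampling noise puts a floor on how small the empirical distance can get.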
There is no "measure" like the one you are looking for. The proof establishes a measure of the randomness of a probability function on all permutations, not a measure of the randomness of an individual permutation/configuration of the deck. No individual deck configuration is "more random" than another in any sense that is meaningful for probability and statistics.
Rather, we are measuring how far a probability function is away from an "ideal" probability function.
Given a finite set $X$ and a probability function $p:X\to\mathbb R^{\geq 0}$, we can define the distance from that probability function to the uniform probability function as a vector distance. If $X=\{x_1,\dots,x_N\}$, then treat $p$ as the vector $(p(x_1),p(x_2),\dots,p(x_N))$ and compute the Euclidean distance of that vector to $(1/N,1/N,\dots,1/N)$, the uniform probability vector.
You might think that measure is arbitrary, but it is actually closely related to the "standard deviation." It is a remarkably useful measure of how far from "uniform" a probability function is.
(I don't recall whether this is exactly the distance used in the paper, but it is a useful example of the kind of distance used to measure how close to uniform a random variable is.)
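A minimal sketch of this distance computation (the function name `dist_to_uniform` is just for illustration):

```python
import math

def dist_to_uniform(p):
    """Euclidean distance from the probability vector p to the
    uniform vector (1/N, ..., 1/N), where N = len(p)."""
    n = len(p)
    return math.sqrt(sum((pi - 1 / n) ** 2 for pi in p))

print(dist_to_uniform([1/6] * 6))           # fair die: 0.0
print(dist_to_uniform([1, 0, 0, 0, 0, 0]))  # die that always rolls 1
```

The fair die sits exactly at the uniform vector (distance $0$), while the fully deterministic die is as far from uniform as a probability vector on six outcomes can be.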
Now, in this case, you have $N=52!$, because $X$ is the space of all possible permutations of the cards. $52!$ is a huge number, and you are not going to be able to write out the vectors.
But that doesn't mean you can't compute estimates for this distance.
The model for $X$ in shuffling is the group of permutations $\Sigma_{52}$. And a single riffle shuffle has a probability function $r:\Sigma_{52}\to\mathbb R^{\geq 0}$ which is mostly zero; that is, most permutations cannot result from a single riffle shuffle. But the fact that $X=\Sigma_{52}$ is a group lets us do clever math to deal with repeated riffle shuffles. And we can get good estimates for our measurement.
The number of possible riffle shuffle outcomes is roughly $2^{52}\approx 4.5\times 10^{15}$. So the number of distinct outputs from $4$ riffle shuffles is at most $2^{52\cdot 4}\approx 4.1\times 10^{62}<52!\approx 8.1\times 10^{67}$. Indeed, at most one in every roughly $200{,}000$ permutations can even be output after four shuffles. So if you want every permutation to be possible after shuffling, you'd have to shuffle at least $5$ times. But the riffle shuffle isn't uniform: its roughly $2^{52}$ outcomes are not all equally likely. You are more likely, for example, to cut near the middle of the deck than near an end.
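These counts are easy to sanity-check exactly with Python's big-integer arithmetic (a quick check of the bounds above, not part of the original argument):

```python
import math

n_perms = math.factorial(52)     # 52!, about 8.1e67
bound_4 = 2 ** (52 * 4)          # upper bound on distinct outputs of 4 shuffles
bound_5 = 2 ** (52 * 5)          # same bound for 5 shuffles

print(n_perms / bound_4)             # about 1.96e5: at most ~1 in 200,000 reachable
print(bound_4 < n_perms < bound_5)   # True: at least 5 shuffles to cover everything
```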
What the proof shows is that you get significantly closer to the ideal with each shuffle up to $7$, and then the progress slows. Additional shuffles do improve the measure, but not by much.
Best Answer
Often, when people informally say that something is "more random", what they mean is that the distribution has higher entropy. If $X$ is a discrete random variable, taking values in the set $\mathcal X$, the entropy $H(X)$ of $X$ is defined to be $$ H(X) = - \sum_{x \in \mathcal X} \mathbb P(X = x) \times \log(\mathbb P(X = x)), $$ where it is understood that $0 \times \log(0) = 0$.
For example, if $\mathcal X$ is finite, then the distribution that maximizes the entropy is the uniform distribution: it is the "most random". The distribution that minimizes the entropy is the one that takes a single value with probability 1: it has an entropy of 0, making it the "least random".
Consider your example of the sum of two die rolls. There are 11 possible outcomes, with probabilities $\frac1{36}, \frac1{18}, \frac1{12}, \frac19, \frac5{36}, \frac16, \frac5{36}, \frac19, \frac1{12}, \frac1{18}, \frac1{36}$. Using logarithms base $2$, the entropy of this distribution is approximately $3.274$. If we instead consider the entropy of the uniform distribution on these 11 outcomes -- so we have the probability $\frac1{11}$ eleven times -- then the entropy is $$ - \sum_{i=1}^{11} \frac1{11} \log(1/11) = \log(11) \approx 3.459. $$
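These numbers are easy to reproduce (a quick sketch using base-2 logarithms; the helper name `entropy` is just for illustration):

```python
import math
from collections import Counter

def entropy(probs, base=2):
    """Shannon entropy of a probability distribution, skipping zero terms."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Distribution of the sum of two fair dice: 36 equally likely (a, b) pairs.
two_dice = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = [c / 36 for c in two_dice.values()]

print(round(entropy(probs), 3))        # 3.274 bits
print(round(entropy([1/11] * 11), 3))  # 3.459 bits: log2(11), the maximum
```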
Drawing successive cards from the deck also becomes less random. Initially there are 52 equally likely outcomes, which gives entropy $\log(52)$. After drawing one card, there are only 51 equally likely outcomes, so the entropy goes down to $\log(51)$. And so on.
A die that cannot roll the same number twice in a row also has lower entropy. Consider, for example, the space of all possible five-roll sequences. On a standard die, this space has $6^5$ elements, all equally likely, for a total entropy of $5\log(6)$. If the same roll cannot appear twice in a row, the space consists of only $6 \times 5^4$ elements; even if these are all equally likely, the associated entropy is only $\log(6) + 4 \log(5)$.
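The comparison is again quick to compute (base-2 logarithms, just as a sketch):

```python
import math

free = 5 * math.log2(6)                      # five unconstrained fair rolls
no_repeat = math.log2(6) + 4 * math.log2(5)  # no roll repeated back-to-back

print(round(free, 3))       # 12.925 bits
print(round(no_repeat, 3))  # 11.873 bits: strictly less entropy
```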