The mathematical definition is very easy. Two events $A$ and $B$ are independent if and only if $$P(A\cap B) = P(A)P(B).$$
In "pure" probability theory there's no interpretation of this; it's just a definition, a purely mathematical statement I can make about two events and a probability distribution.
To explain what it "means" you have to explain what probability means, and there's no universally accepted answer to that question. It's a big philosophical problem that mathematicians avoid by writing down some equations and solving them.
The motivation comes from the idea of conditional probability.
Suppose you throw a die. The probability you throw a six is $\frac 16$ and the probability you throw an even number is $\frac 12$. You can check with the formula above that the two events are not independent.
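If you want to see that check spelled out, here is a quick Python sketch (the variable names are mine) using exact fractions:

```python
from fractions import Fraction

# Fair die: each outcome 1..6 has probability 1/6
omega = {1, 2, 3, 4, 5, 6}
six = {6}
even = {2, 4, 6}

def prob(event):
    return Fraction(len(event), len(omega))

p_six = prob(six)          # 1/6
p_even = prob(even)        # 1/2
p_both = prob(six & even)  # 1/6, since a six is automatically even

# Independence would require P(A ∩ B) = P(A) P(B), i.e. 1/6 = 1/12
print(p_both == p_six * p_even)  # False: not independent
```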
To get an idea of why, suppose you throw a die but don't look at it. You want a six. I tell you whether or not it's even, and you decide whether to keep it or roll again. If I tell you it's odd, then you know it's not a six and you roll again. If I tell you it's even, then there are only three numbers it could be, and one of them is a six. The probability that you got a six is now one in three, so you'd be crazy to throw again.
In maths we define conditional probability as follows: $$P(A|B) = \frac{P(A\cap B)}{P(B)}.$$
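Plugging the die example into this formula (a small Python sketch with my own variable names):

```python
from fractions import Fraction

p_even = Fraction(1, 2)          # P(B): the die shows an even number
p_six_and_even = Fraction(1, 6)  # P(A ∩ B): a six is automatically even

# P(A|B) = P(A ∩ B) / P(B)
p_six_given_even = p_six_and_even / p_even
print(p_six_given_even)  # 1/3, the "keep the die" probability from the story above
```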
Again, in "pure" maths there's no interpretation of this; it's just a formula.
But in the real world $P(A|B)$ is the probability that $A$ happens if you already know that $B$ happened.
So the interpretation of independence is that $A$ and $B$ are independent if and only if $P(A|B) = P(A)$: knowing that $B$ happened doesn't affect the probability that $A$ happened.
This concept makes intuitive sense to people. If my team is winning at half time, it's more likely to win the game than if it weren't, so the two events are not independent. If my team is winning at half time, that doesn't make it any less likely to rain tomorrow, so those events are independent.
It's worth noting, though, that independence is an assumption, and one I might be wrong about. If my team happens to play well in the rain, then their winning at half time makes it more likely to be raining during the game, which in turn might make it more likely to be raining tomorrow. So the two events might not be independent after all.
So in fact a better definition of independence would be: an assumption I make to simplify my model, which is usually wrong, but hopefully not that wrong.
Given any $k\ge2$ and $k$ events, assume all outcomes in which an even number of the events occur are equally likely, while any outcome in which an odd number occur has probability $0$. One way to construct this: take $X_1,\ldots,X_k\in\{0,1\}$, draw $X_1,\ldots,X_{k-1}$ independently with probability $1/2$ each, and set $X_k$ so that $\sum_{i=1}^k X_i$ is even. Then any $k-1$ of the variables are independent, but all $k$ together are not.
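You can convince yourself by enumerating this construction in Python (my own sketch, taking $k=3$ as an illustrative choice):

```python
import itertools
from fractions import Fraction

k = 3  # any k >= 2 works; 3 is just a small example

# Enumerate the 2^(k-1) equally likely outcomes of X_1, ..., X_{k-1},
# setting X_k so that the total sum is even.
outcomes = []
for bits in itertools.product([0, 1], repeat=k - 1):
    xk = sum(bits) % 2  # makes the total even
    outcomes.append(bits + (xk,))

def prob(pred):
    return Fraction(sum(1 for o in outcomes if pred(o)), len(outcomes))

# Each pair is independent: P(X_i=1, X_j=1) = 1/4 = P(X_i=1) P(X_j=1)
for i, j in itertools.combinations(range(k), 2):
    assert prob(lambda o: o[i] == 1 and o[j] == 1) == Fraction(1, 4)

# But all k together are not independent: an odd total never occurs
assert prob(lambda o: sum(o) % 2 == 1) == 0
print("pairwise independent, not mutually independent")
```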
A similar example can be made with $X_1,\ldots,X_k\sim\text{Uniform}[0,1]$. Draw $X_1,\ldots,X_{k-1}$ independently, and then let $X_k$ be the unique value in $[0,1)$ that makes $\sum_{i=1}^k X_i\in\mathbb{Z}$.
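A quick simulation of this continuous version (again my own sketch, with $k=3$): the full sum is forced to be an integer, so the variables can't be mutually independent, yet each coordinate on its own still looks uniform.

```python
import random

random.seed(0)
k = 3  # hypothetical small choice

def sample():
    xs = [random.random() for _ in range(k - 1)]
    # X_k is the unique value in [0, 1) making the sum an integer
    xs.append(-sum(xs) % 1.0)
    return xs

draws = [sample() for _ in range(100_000)]

# The total is always an integer, so the X_i are not mutually independent...
assert all(abs(sum(d) - round(sum(d))) < 1e-9 for d in draws)

# ...yet each coordinate alone still looks Uniform[0, 1]
for i in range(k):
    mean_i = sum(d[i] for d in draws) / len(draws)
    assert abs(mean_i - 0.5) < 0.01
```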
Best Answer
To specify a permutation where $a_k$ is the largest among $\{a_1,\dots,a_k\}$, you choose which $k$ numbers form the set $\{a_1,\dots,a_k\}$ in $\binom{n}k$ ways, you place the largest at the $k^{th}$ spot in $1$ way, you order the other $k-1$ elements in $(k-1)!$ ways, then you order the remaining $n-k$ elements in $(n-k)!$ ways. Therefore, $$ P(A_k)=\frac{\text{# of valid permutations}}{\text{total number of permutations}}=\frac{\binom{n}k(k-1)!(n-k)!}{n!}=\frac1k $$ Similarly, letting $k<\ell$, in order to specify a permutation where $a_\ell$ is the largest among $a_1,\dots,a_\ell$ and $a_k$ is the largest among $a_1,\dots,a_k$, you choose the $\ell$ numbers forming $\{a_1,\dots,a_\ell\}$ in $\binom{n}\ell$ ways, place the largest at the $\ell^{th}$ spot, choose which $k$ of the remaining $\ell-1$ numbers form $\{a_1,\dots,a_k\}$ in $\binom{\ell-1}k$ ways, place the largest of those at the $k^{th}$ spot, order the other $k-1$ elements in $(k-1)!$ ways, order the remaining $\ell-1-k$ elements of the first $\ell$ spots in $(\ell-1-k)!$ ways, and order the last $n-\ell$ elements in $(n-\ell)!$ ways.
If you multiply all those numbers out and divide by $n!$, you will get $\frac{1}{k\ell}$.
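As a sanity check, both probabilities can be verified by brute force over all permutations (a Python sketch with hypothetical small values of $n$, $k$, $\ell$):

```python
import itertools
from fractions import Fraction

n, k, l = 5, 2, 4  # small example values with k < l

perms = list(itertools.permutations(range(1, n + 1)))

def A(perm, m):
    # Event A_m: the m-th entry is the largest of the first m entries
    return perm[m - 1] == max(perm[:m])

p_k = Fraction(sum(A(p, k) for p in perms), len(perms))
p_l = Fraction(sum(A(p, l) for p in perms), len(perms))
p_both = Fraction(sum(A(p, k) and A(p, l) for p in perms), len(perms))

assert p_k == Fraction(1, k)
assert p_l == Fraction(1, l)
assert p_both == Fraction(1, k * l) == p_k * p_l  # so A_k and A_l are independent
```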