Typical implementations of quicksort don't append the pivot to one of the sublists. Instead, they are written (in Haskell notation) as

qsort [] = []
qsort (pivot:rest) = qsort left ++ [pivot] ++ qsort right
  where left  = filter (< pivot) rest
        right = filter (>= pivot) rest

which keeps the symmetry between the left and right sublists (when the list elements are distinct) and results in the same number of comparisons whether the initial list is sorted in increasing or decreasing order. Note that the pivot must be removed before filtering; filtering the whole list would put the pivot back into the right sublist and the recursion would never terminate.
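For instance, after loading the definition above in GHCi:

ghci> qsort [3,1,4,1,5,9,2,6]
[1,1,2,3,4,5,6,9]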
Note that the 'worst case' isn't reached only for lists that are sorted in increasing or decreasing order. For example, the following lists all require six comparisons to sort (the instrumented sketch after the list checks this):
[1,2,3,4]
[1,2,4,3]
[1,4,2,3]
[1,4,3,2]
[4,3,2,1]
[4,3,1,2]
[4,1,2,3]
[4,1,3,2]
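A small instrumented variant of the sort can verify these counts. It charges one comparison per non-pivot element at each partitioning step, which is the usual cost model (the two-filter version above literally compares each element twice); the name qsortCount is mine:

import Data.List (partition)

-- Sort and count comparisons, charging one comparison per non-pivot
-- element at each partitioning step.
qsortCount :: Ord a => [a] -> ([a], Int)
qsortCount [] = ([], 0)
qsortCount (pivot:rest) = (ls ++ [pivot] ++ rs, cl + cr + length rest)
  where (left, right) = partition (< pivot) rest
        (ls, cl)      = qsortCount left
        (rs, cr)      = qsortCount right

Evaluating map (snd . qsortCount) over the eight lists above yields 6 for each of them.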
In general, any ordering that results in the least or greatest element being selected as the pivot at every stage of the recursion will give worst-case performance.
There are $\binom{N}{n+1}$ ways to choose the first $n+1$ elements of the list. There are exactly $n$ ways to order these so that the only pair out of order is the last pair: we must pick one of the $n$ elements other than the largest, put it at the end, and precede it by the remaining $n$ elements in increasing (i.e., sorted) order. Finally, there are $(N-n-1)!$ ways to permute the remaining elements of the list. Thus,
$$f_N(n)=n\binom{N}{n+1}(N-n-1)!=\frac{nN!(N-n-1)!}{(n+1)!(N-n-1)!}=\frac{nN!}{(n+1)!}\;.$$
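As a sanity check, the closed form can be compared against a brute-force count over all permutations of $[1..N]$ for small $N$. Here I read $f_N(n)$ as the number of permutations whose first $n$ elements are increasing and whose $(n+1)$st element breaks the run; that reading, and the names fBrute and fFormula, are assumptions on my part:

import Data.List (permutations)

-- Permutations of [1..bigN] whose initial ascending run has length exactly n,
-- i.e. whose first descent sits between positions n and n+1.
fBrute :: Int -> Int -> Int
fBrute bigN n = length [p | p <- permutations [1 .. bigN], ascRun p == n]
  where ascRun xs = 1 + length (takeWhile (uncurry (<)) (zip xs (tail xs)))

-- The closed form n * N! / (n+1)! = n * (n+2) * (n+3) * ... * N.
fFormula :: Int -> Int -> Int
fFormula bigN n = n * product [n + 2 .. bigN]

With $N=5$, and [fBrute 5 n == fFormula 5 n | n <- [1 .. 4]] evaluates to True.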
Best Answer
The $k$th element has probability $1/n$, not $1/(n-k+1)$. For example, for the second element you need the first element not to be a match, which has probability $(n-1)/n$; multiplying by $1/(n-1)$ for the second element gives $1/n$. So really the complexity is $(1/n)\sum_{k=1}^n k$, which is exactly $(n+1)/2$, roughly $n/2$.
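A one-line check of that average, assuming the match is equally likely to be at any of the $n$ positions (the name avgComparisons is just for illustration):

-- (1/n) * sum_{k=1}^n k = (n+1)/2, roughly n/2 for large n.
avgComparisons :: Int -> Double
avgComparisons n = fromIntegral (sum [1 .. n]) / fromIntegral n

For example, avgComparisons 1000 returns 500.5, matching (1000 + 1) / 2.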