Does anyone know if the Gumbel can occur as a limit distribution for such a sum?
When we have $n$ exponentially distributed variables $X_i \sim Exp(\gamma = i)$ (with expectation $1/i$ and variance $1/i^2$), then the sum
$$S = \sum_{i=1}^n (X_i - 1/i)$$
approaches a Gumbel distribution.
There is a connection between this sum and the maximum order statistic.
We can see this sum as the waiting time until all $n$ bins are filled, when the bins are filled according to a Poisson process.
- Approach with the sum. The waiting time between consecutive fillings is exponentially distributed. While all $n$ bins are still empty, the rate at which the first bin gets filled is $n$. Once one bin is filled, $n-1$ bins remain empty, so the rate for the second filling is $n-1$, and so on...
- Approach with the maximum. We can consider the waiting times for filling each individual bin. The waiting time to fill all bins is equal to the maximum of the individual waiting times.
The distribution of the maximum of exponentially distributed variables approaches a Gumbel distribution. Therefore the expression in terms of a sum, which has the same distribution, will also approach the Gumbel distribution.
See also "Intuition about the coupon collector problem approaching a Gumbel distribution" on Cross Validated.
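As a sanity check, here is a short simulation sketch (the setup below — sample sizes, seed — is my own choice, not from the post) comparing the centered sum $S$ with its Gumbel limit:

```python
# Simulation sketch: S = sum_i (X_i - 1/i) with X_i ~ Exp(rate = i)
# should be approximately Gumbel for large n (mean 0, variance pi^2/6).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 10_000

# X_i ~ Exp(rate = i) has scale 1/i; center each term by its mean 1/i.
i = np.arange(1, n + 1)
X = rng.exponential(scale=1.0 / i, size=(reps, n))
S = (X - 1.0 / i).sum(axis=1)

# The limit is a Gumbel with mean 0 and variance sum 1/i^2 -> pi^2/6.
print(S.mean())        # close to 0
print(S.var())         # close to pi^2/6 ~ 1.645
print((S > 0).mean())  # below 1/2: the limit distribution is right-skewed
```

The skewness check at the end distinguishes the Gumbel limit from a normal one: a mean-zero normal would put half the mass above zero.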
This is of course not general.
If we use $X_i \sim N(\mu = 1/i, \sigma^2 = 1/i^2)$, then a (properly scaled) sum will approach a normal distribution.
That is a trivial example, but there are many more cases that converge to a normal distribution. The relevant condition that needs to be fulfilled is the Lyapunov condition.
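For reference, here is a standard statement of that condition (not from the original post): for independent $X_i$ with means $\mu_i$, variances $\sigma_i^2$, and $s_n^2 := \sum_{i=1}^n \sigma_i^2$, if for some $\delta > 0$
$$\lim_{n\to\infty} \frac{1}{s_n^{2+\delta}} \sum_{i=1}^n E\left[|X_i - \mu_i|^{2+\delta}\right] = 0,$$
then $\frac{1}{s_n}\sum_{i=1}^n (X_i - \mu_i)$ converges in distribution to $N(0,1)$. The exponential example above fails this condition: $s_n^2 = \sum_{i=1}^n 1/i^2$ stays bounded, so the sum of higher absolute moments does not become negligible relative to $s_n^{2+\delta}$, which is consistent with the non-normal (Gumbel) limit.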
$\newcommand{\X}{\mathbf X}\newcommand{\x}{\mathbf x}\newcommand{\de}{\delta}\newcommand{\R}{\mathbb R}$Your understanding of the purposes of the bootstrap is largely incorrect.
Here is what the foundational paper by Efron that introduced the bootstrap method says:
We discuss the following problem: given a random sample $\X=(X_1,X_2,\dots,X_n)$ from an unknown probability distribution $F$, estimate the sampling distribution of some prespecified random variable $R(\X,F)$, on the
basis of the observed data $\x$.
We see that here the (say real-valued) random variable (r.v.)
\begin{equation*}
Y:=R(\X,F)
\end{equation*}
in question is a function not only of the sample $\X$ but also of the unknown probability distribution $F$ (of each $X_i$).
To estimate the distribution of the r.v. $Y=R(\X,F)$ by the bootstrap method, we obtain (usually by computer simulation) a large number, say $B$, of (desirably/approximately) independent random samples
\begin{equation*}
\x^*_1=(x^*_{1,1},\dots,x^*_{1,n}),\dots,\x^*_B=(x^*_{B,1},\dots,x^*_{B,n})
\end{equation*}
from the empirical distribution
\begin{equation*}
F_\x=\frac1n\sum_{i=1}^n\de_{x_i}
\end{equation*}
corresponding to the observed sample $\x=(x_1,\dots,x_n)$, where $\de_x$ is the Dirac delta measure supported on the singleton set $\{x\}$. So, the $Bn$ r.v.'s $x^*_{j,i}$ with $j=1,\dots,B$ and $i=1,\dots,n$ are (desirably/approximately) iid each with the distribution $F_\x$.
Here $n$ is large enough that the empirical distribution $F_\x$ is close enough to the true but unknown distribution $F$, and $B$ is very large -- which is affordable given the computing power available nowadays.
Then, for each $j=1,\dots,B$, in the formula $Y=R(\X,F)$ we replace the r.v. $\X$ and the unknown distribution $F$ by the known $\x^*_j$ and $F_\x$, respectively, to get
\begin{equation*}
y^*_1:=R(\x^*_1,F_\x),\dots,y^*_B:=R(\x^*_B,F_\x).
\end{equation*}
Because the empirical distribution $F_\x$ is close enough to the true but unknown distribution $F$, the empirical distribution
\begin{equation*}
\frac1B\sum_{j=1}^B\de_{y^*_j} \tag{1}\label{1}
\end{equation*}
will be somewhat close to the desired unknown distribution of $Y=R(\X,F)$ -- if the function $R$ is continuous in an appropriate sense. The empirical distribution \eqref{1} is called the bootstrap distribution of $Y=R(\X,F)$.
The simplest example of the function $R$ in Efron's paper is given by
\begin{equation*}
R(\X,F)=\bar X-\int_\R x\,F(dx),
\end{equation*}
where $\bar X:=\frac1n\sum_{i=1}^n X_i$ and $F$ is the Bernoulli distribution with an unknown parameter.
As noted by Efron, clearly in this case one does not need any simulation to find/estimate the mean and the variance of the distribution of $R(\x^*_1,F_\x)$ -- they are obviously $0$ and $\bar x(1-\bar x)/n$ (the single-draw variance $\bar x(1-\bar x)$ divided by $n$), respectively, where, of course, $\bar x:=\frac1n\sum_{i=1}^n x_i$.
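This no-simulation claim is easy to verify by simulating anyway (the parameter $0.3$, sample size, and seed below are my own illustrative choices): the bootstrap values $y^*_j = \bar x^*_j - \bar x$ have mean $0$ and variance $\bar x(1-\bar x)/n$, the single-draw variance divided by $n$.

```python
# Check of the Bernoulli example: for R(X, F) = Xbar - E_F[X], the
# bootstrap values y*_j = xbar*_j - xbar have mean 0 and variance
# xbar * (1 - xbar) / n.
import numpy as np

rng = np.random.default_rng(2)

x = rng.binomial(1, 0.3, size=200)  # observed Bernoulli sample
n, B = len(x), 20_000
xbar = x.mean()

# B bootstrap samples of size n from F_x, reduced to their means.
boot_means = rng.choice(x, size=(B, n), replace=True).mean(axis=1)
y_star = boot_means - xbar          # R(x*_j, F_x)

print(y_star.mean())                # close to 0
print(y_star.var())                 # close to xbar * (1 - xbar) / n
```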
Of course, neither the true distribution of $Y=R(\X,F)$ nor its bootstrap distribution will be even approximately normal in general. However, recall that the bootstrap distribution is random (or, at least, pseudo-random), even for a given realization $\x$ of the random sample $\X$ -- because the bootstrap distribution depends on the (pseudo-)random bootstrap samples $\x^*_1,\dots,\x^*_B$. Therefore, for each realization $\x$ of $\X$, the bootstrap distribution of $R(\X,F)$ (being the empirical distribution \eqref{1} (based on $y^*_1,\dots,y^*_B$)) will satisfy a central limit theorem for empirical measures -- see e.g. Dudley and subsequent papers.
Best Answer
In such generality, virtually nothing can be said about the asymptotic distribution of $V_{Np}:=\sum_{i=1}^p X_{Ni}^2$ or even about the existence of such an asymptotic distribution. In particular, $V_{Np}$ may have a non-normal asymptotic distribution or no asymptotic distribution at all.
Indeed, consider the following three simple settings.
Setting 1: All the $X_{Ni}$'s are iid standard normal and $c_p^\top c_p=1$. Then $Y_{Np}:=c_p^\top X_N\sim N(0,1)$ and $V_{Np}\sim\chi^2_p\approx N(p,2p)$ (as $p\to\infty$), so that $V_{Np}$ is asymptotically normal.
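The normal approximation in Setting 1 is easy to see numerically (the values of $p$ and the number of replications below are arbitrary choices for illustration): a $\chi^2_p$ variable, standardized as $(V - p)/\sqrt{2p}$, is approximately $N(0,1)$ for large $p$.

```python
# Setting 1 illustration: chi^2_p is approximately N(p, 2p) for large p.
import numpy as np

rng = np.random.default_rng(3)
p, reps = 2_000, 20_000

V = rng.chisquare(p, size=reps)   # V_Np = sum of p squared N(0,1) variables
Z = (V - p) / np.sqrt(2 * p)      # standardized version

print(Z.mean(), Z.var())          # both close to the N(0,1) values 0 and 1
print((np.abs(Z) < 1.96).mean())  # close to the normal coverage 0.95
```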
Setting 2: $X_{N1}\sim N(0,1)$, $X_{N2}=\cdots=X_{Np}=0$, and $c_p=[1,0,\dots,0]^\top$. Then $Y_{Np}=X_{N1}\sim N(0,1)$ and $V_{Np}=X_{N1}^2\sim\chi^2_1$, so that the asymptotic distribution of $V_{Np}$ is not normal.
Setting 3: This is a combination of Settings 1 and 2: for odd $N$ we use Setting 1, and for even $N$ we use Setting 2. Then the distribution of $V_{Np}$ alternates between (approximately) $N(p,2p)$ and $\chi^2_1$ along the sequence, so there is no asymptotic distribution at all.