If $f(x)\,dx$ is a probability distribution with expected value $0$ and variance $1$, and the distribution of $X_i$ is
$$f\left(\dfrac{x-\mu_i}{\sigma_i}\right)\cdot\dfrac{dx}{\sigma_i},$$
and $X_i$ are independent, then certainly the distribution of $X_1+\cdots+X_n$ has expected value $\mu_1+\cdots+\mu_n$ and variance $\sigma_1^2+\cdots+\sigma_n^2$. Also, the higher cumulants would add together in the same way. (The fourth cumulant, for example, is $\mathbb E((X-\mu)^4) - 3(\mathbb E((X-\mu)^2))^2$, and the coefficient $3$ is the only number that makes this functional additive in the sense that the fourth cumulant of a sum of independent random variables is the sum of their fourth cumulants.)
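As a quick numerical sanity check of that additivity (a minimal sketch: `scipy.stats.kstat` is SciPy's unbiased estimator of the $k$-th cumulant, and the exponential scales $1$ and $3$ are arbitrary choices made so the higher cumulants are nonzero):

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(0)
n_samples = 2_000_000

# Two independent, non-identically distributed variables; exponentials
# are skewed, so their third and fourth cumulants are nonzero.
x1 = rng.exponential(scale=1.0, size=n_samples)
x2 = rng.exponential(scale=3.0, size=n_samples)

# kstat(data, k) is SciPy's unbiased estimator of the k-th cumulant.
for k in (2, 3, 4):
    cum_of_sum = kstat(x1 + x2, k)
    sum_of_cums = kstat(x1, k) + kstat(x2, k)
    print(f"k={k}: cumulant of sum = {cum_of_sum:.3f}, "
          f"sum of cumulants = {sum_of_cums:.3f}")
```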
We've tacitly assumed $\sigma_i<\infty$. If $\sum_{i=1}^\infty\sigma_i^2=\infty$ and no single term dominates the total variance, then as $n$ grows the distribution of the standardized sum approaches a normal distribution (the appropriate generalization of the central limit theorem here is the Lindeberg–Feller theorem). But what happens for small $n$ is another question, and the answer would depend on what function $f$ is.
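As a rough illustration of that claim (a minimal sketch, not the theorem itself: the scales $\sigma_i=i$ and the uniform base distribution are arbitrary choices), one can watch the skewness and excess kurtosis of the standardized sum shrink toward the Gaussian values of $0$:

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(1)
n_samples = 100_000

for n in (2, 10, 100):
    # Independent uniforms scaled by sigma_i = i; the variances grow,
    # but no single term dominates, so normality should improve with n.
    sigmas = np.arange(1, n + 1)
    parts = rng.uniform(-1.0, 1.0, size=(n_samples, n)) * sigmas
    z = parts.sum(axis=1)
    z = z / z.std()
    print(f"n={n:3d}: skew = {skew(z):+.3f}, "
          f"excess kurtosis = {kurtosis(z):+.3f}")
```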
I said above that $f(x)\,dx$ has expectation $0$ and variance $1$. But one can also have perfectly good location-scale families in which the expectation, and a fortiori, the variance, do not exist. The most well-known case is the Cauchy distribution. The simplest result there is that $(X_1+\cdots+X_n)/n$ actually has the same Cauchy distribution as $X_1$ if these $n$ variables are i.i.d. It doesn't get narrower. So a lot depends on which function $f$ is.
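A minimal simulation of that Cauchy fact (sample sizes here are arbitrary; the standard Cauchy has quartiles at $\pm1$, so the interquartile range of the average should stay near $2$ no matter how large $n$ gets):

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 100_000

# The standard Cauchy has quartiles at -1 and +1 (IQR = 2). If averaging
# narrowed the distribution, the IQR of the mean would shrink with n.
for n in (1, 10, 100):
    means = rng.standard_cauchy(size=(n_samples, n)).mean(axis=1)
    q25, q75 = np.percentile(means, [25, 75])
    print(f"n={n:3d}: quartiles = ({q25:+.3f}, {q75:+.3f})")
```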
Let's answer the first one.
If you know the PDF of $Z$, say $f_{Z}\left(z\right)$, then the PDF of $c\cdot Z$ (take $c>0$) follows from the definition of the CDF:
\begin{equation}
\begin{split}
\text{Pr}\left\{c\cdot Z < z \right\} &= \text{Pr}\left\{Z < \cfrac{z}{c} \right\} = F_{Z}\left(\cfrac{z}{c}\right), \quad \text{so}
\\
f_{c\cdot Z}\left(z\right) = \cfrac{d}{dz}\left[F_{Z}\left(\cfrac{z}{c}\right)\right] &= \cfrac{1}{c}\, f_{Z}\left(\cfrac{z}{c}\right).
\end{split}
\end{equation}
So, applied to a chi-square with $c=\sigma^{2}$: scale the PDF of $\chi_{N}^{2}$ by $\cfrac{1}{\sigma^{2}}$, scale its argument by the same factor $\cfrac{1}{\sigma^{2}}$, and plot it.
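Here is a minimal sketch of that recipe in Python (the degrees of freedom $N=5$ and scale $\sigma^{2}=2.5$ are assumed values for illustration), checking the hand-derived density against SciPy's built-in `scale` argument, which encodes the same transformation:

```python
import numpy as np
from scipy.stats import chi2

N, sigma2 = 5, 2.5          # assumed values, for illustration only
z = np.linspace(0.5, 40.0, 9)

# Hand-derived density of Y = sigma^2 * Z with Z ~ chi^2_N:
#   f_Y(z) = (1 / sigma^2) * f_{chi^2_N}(z / sigma^2)
by_hand = chi2.pdf(z / sigma2, df=N) / sigma2

# SciPy expresses the same scaling through its `scale` argument.
built_in = chi2.pdf(z, df=N, scale=sigma2)

print(np.allclose(by_hand, built_in))   # True
```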
Best Answer
In the case where you have two variables, the following holds:
Suppose $X_1$ has Poisson distribution with parameter $\lambda_1$ and $X_2$ has Poisson distribution with parameter $\lambda_2$. Then if $X_1$ and $X_2$ are independent, the variable $X_1+X_2$ has Poisson distribution with parameter $\lambda_1+\lambda_2$.
This is intuitively clear if we regard the variables as relating to Poisson processes with common unit time. $X_1$ gives the number of events occurring in a unit time period where the average number of events per unit time period is $\lambda_1$. $X_2$ gives the number of events occurring in a unit time period where the average number of events per unit time period is $\lambda_2$. By the independence assumption, the total number of events from both processes occurring in a unit time period would be $X_1+X_2$, and the average number of these events per unit time period would be $\lambda_1+\lambda_2$. So, $X_1+X_2$ has Poisson distribution with parameter $\lambda_1+\lambda_2$.
Rigorously, we can compute the probability mass function, $p_Y$, of $Y=X_1+X_2$ as follows:
For our variables $X_1$ and $X_2$, we have for $i\ge0$: $$P[X_1=i]= {\lambda_1^i\over i!} e^{-\lambda_1}\quad\text{and}\quad P[X_2=i]= {\lambda_2^i\over i!} e^{-\lambda_2}.$$
Let $k\ge0$. Then: $$ \eqalign{ p_Y(k) &=\sum_{i=0}^kP[X_1=i,X_2=k-i]\cr &=\sum_{i=0}^kP[X_1=i]\cdot P[ X_2=k-i]\cr &=\sum_{i=0}^k{\lambda_1^i\over i!}e^{-\lambda_1}\cdot{\lambda_2^{k-i}\over(k-i)!}e^{-\lambda_2}\cr &=\sum_{i=0}^k{\lambda_1^i\lambda_2^{k-i}\over i!\,(k-i)!}e^{-(\lambda_1+\lambda_2)}\cr &= e^{-(\lambda_1+\lambda_2)}\cdot\sum_{i=0}^k{\lambda_1^i\lambda_2^{k-i}\over i!\,(k-i)!}\cr &={1\over k!}\cdot e^{-(\lambda_1+\lambda_2)}\cdot\sum_{i=0}^k{k!\over i!\,(k-i)!}\,\lambda_1^i\lambda_2^{k-i}\cr &={(\lambda_1+\lambda_2)^k\over k!}\cdot e^{-(\lambda_1+\lambda_2)},\cr } $$ where the second equality above used the independence of $X_1$ and $X_2$ and the last equality used the Binomial Theorem.
So, $$ p_Y(k)= {(\lambda_1+\lambda_2)^k\over k!}\cdot e^{-(\lambda_1+\lambda_2)},\quad k\ge 0 ; $$ which we recognize as the Poisson distribution with parameter $\lambda_1+\lambda_2$.
The result for $n\ge 1$ now follows easily by induction.
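For anyone who wants to see the result numerically, a minimal simulation sketch (the rates $\lambda_1=2.0$ and $\lambda_2=3.5$ are arbitrary illustrative choices):

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
lam1, lam2 = 2.0, 3.5       # arbitrary illustrative rates
n_samples = 1_000_000

# Draw the two independent Poisson variables and add them.
y = rng.poisson(lam1, n_samples) + rng.poisson(lam2, n_samples)

# The empirical pmf of X1 + X2 should match Poisson(lam1 + lam2).
for k in range(8):
    print(f"k={k}: empirical {np.mean(y == k):.4f}  "
          f"vs exact {poisson.pmf(k, lam1 + lam2):.4f}")
```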