Today I heard a thermodynamic argument about this: since little work is done by the system in the solid and liquid phases, the heat capacity must be (roughly) the same for the solid and the liquid phase.
This one does not convince me at all, especially because no work is done by the system at all if one considers the constant-volume heat capacity $c_V$. I think the argument you heard was more likely that $c_V \approx c_p$ for liquids and solids (as their volume expansion coefficients are small, and thus $W = p \Delta V$ is small in the constant-pressure case), while the values differ significantly for gases (as they expand significantly).
So I googled and found that one does expect the heat capacity of a liquid to be larger (i.e. a smaller temperature change for a given amount of heat) than that of the solid and the gas: the majority of the contribution to the heat capacity comes in solids from the 3 vibrational degrees of freedom and in gases from the 3 translational degrees of freedom, while in liquids both are significant, hence the energy gets distributed over $3 + 3 = 2 \times 3$ degrees of freedom. Hence we expect something like $c_l \approx 2 c_s$ (at least roughly).
While I do not see how the math fits here, this argument is much more convincing. At high temperatures (depending on the system, this usually means above some $10\,\text{K}$), each vibrational degree of freedom takes energy $T$, while each translational and each rotational degree of freedom takes energy $T/2$ (in units where $k_B = 1$).
In a solid state system we have a number of phonon modes, depending on the number of atoms $n$ in the unit cell (specifically $3n$). Thus the high temperature limit of $c$ (per amount of substance) will be $3n$ (when counted per unit cell, as with ionic solids), in the case of water it will be $3$ (although water ice has more than one molecule per unit cell).
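To make this counting explicit, here is a small sketch (my own illustration, not from the answer; $k_B = 1$, so $c$ per unit cell is a pure number):

```python
# Illustration: high-temperature phonon counting for a solid, k_B = 1.

def solid_heat_capacity_high_T(atoms_per_cell):
    """High-T limit of c per unit cell: 3n phonon modes, each carrying energy T."""
    phonon_modes = 3 * atoms_per_cell  # 3 polarizations per atom in the cell
    return phonon_modes                # each mode contributes d(T)/dT = 1 to c

print(solid_heat_capacity_high_T(1))  # monatomic solid: 3 (Dulong-Petit)
print(solid_heat_capacity_high_T(2))  # two atoms per cell (e.g. NaCl): 6
```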
As an atom cannot rotate (without being electronically excited, which takes lots of energy), the number of rotational degrees of freedom depends on the form of the molecule (a two atom molecule having two rotations, a non-linear three atom molecule having three rotations).
This gives for gases: $c = 3/2$ for atomic gases, $c = 5/2$ for two-atom (dumbbell-shaped) molecules and $c = 3$ for non-linear three-atom molecules (like water). (Vibrations of molecules usually have energies higher than the thermal energies in our environment, thus those degrees of freedom are "frozen out".)
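The counting for gases can be sketched the same way (again my own illustration, with vibrations assumed frozen out and $k_B = 1$):

```python
# Illustration: ideal-gas heat capacity from counting translations and
# rotations (k_B = 1, vibrations assumed frozen out).

def gas_heat_capacity(n_atoms, linear=True):
    """Dimensionless c per particle; each listed degree of freedom gives T/2."""
    translational = 3              # every molecule translates in 3D
    if n_atoms == 1:
        rotational = 0             # a lone atom cannot rotate
    elif linear:
        rotational = 2             # dumbbell-shaped two-atom molecule
    else:
        rotational = 3             # non-linear molecule such as water
    return (translational + rotational) / 2

print(gas_heat_capacity(1))                # atomic gas: 1.5
print(gas_heat_capacity(2))                # two-atom molecule: 2.5
print(gas_heat_capacity(3, linear=False))  # water vapour: 3.0
```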
In a liquid it is more complicated (and I am not sure about the correctness of my statements here), but we could probably argue that we have three translational degrees of freedom plus vibrational ones (the longitudinal phonons); rotations are usually irrelevant in a liquid due to dense packing. Additionally, in liquid water energy can be absorbed by breaking hydrogen bonds (and exactly this effect causes the negative volume expansion coefficient of water near $0\,^\circ\text{C}$).
As you can see, this seems to predict $c_s = c_g$ for water, but not $c_l = 2c_s$. In other words, it shows that the case of the liquid is much more complicated. Moreover, it shows that the factors are not random, but also that the rules are not general for all materials (they depend on the structure).
In particular, you cannot get away with simply counting degrees of freedom (as the energy in the high-temperature limit depends on the kind of degree of freedom).
We should start by acknowledging that "degree of freedom" has at least two meanings in physics and engineering. One—used in mechanical design—is essentially equivalent to a mechanical generalized coordinate of the system. Another—the more common meaning in thermal physics—is the dimensionality of the system's phase space.
Neither of these things is quite what you should be counting for the purposes of the equipartition theorem.
The thing that the equipartition theorem counts is contributions to the Hamiltonian that are quadratic in either a generalized coordinate or a generalized momentum.
A 1D spring has the freedom to move in only one direction (i.e. one mechanical degree of freedom using the engineering definition), but it has a two-dimensional phase space $(x,p_x)$ (i.e. two degrees of freedom in a common but, to my ear, sloppy usage), and the Hamiltonian is quadratic in both parameters
$$ H = \frac{1}{2}k x^2 + \frac{p_x^2}{2m} \,,$$
so it has two contributions for the purposes of equipartition.
In a model solid like the one you exhibit each atom (in a $D$ dimensional space) can be naively associated with $D$ mechanical degrees of freedom, $2D$ parameters in phase space, and $2D$ quadratic modes in the Hamiltonian.
As a side note, it is not necessarily true that each mechanical degree of freedom results in two contributions to the equipartition theorem. The handiest counterexample is the ideal gas: a free particle's Hamiltonian contains only the kinetic term, so each mechanical degree of freedom contributes just one quadratic term (and hence energy $T/2$, not $T$).
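A quick numerical check of this counting (a sketch of mine, with the arbitrary choices $k = m = 1$ and $k_B = 1$): evaluating the classical Boltzmann average $\langle H \rangle$ on a grid reproduces $T/2$ per quadratic term, i.e. $T$ for the spring and $T/2$ for a free particle.

```python
import math

def mean_energy(T, k=1.0, m=1.0, L=20.0, n=400):
    """Classical Boltzmann average <H> for H = k x^2/2 + p^2/(2m),
    evaluated by a midpoint rule on the box [-L, L] x [-L, L]."""
    dx = 2.0 * L / n
    num = 0.0   # accumulates H * exp(-H/T)
    Z = 0.0     # accumulates exp(-H/T) (partition function on the grid)
    for i in range(n):
        x = -L + (i + 0.5) * dx
        for j in range(n):
            p = -L + (j + 0.5) * dx
            H = 0.5 * k * x * x + p * p / (2.0 * m)
            w = math.exp(-H / T)
            num += H * w
            Z += w
    return num / Z

print(mean_energy(2.0))         # spring: two quadratic terms -> ~T = 2.0
print(mean_energy(2.0, k=0.0))  # free particle: one quadratic term -> ~T/2 = 1.0
```

Setting $k = 0$ removes the potential term, and the average energy indeed drops to half, which is exactly the ideal-gas counterexample above.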
Best Answer
Firstly, some definitions. A plasma consists of various "species" of particle, such as electrons or nuclei (I will not discuss electron-positron or quark-gluon plasmas here, but you can probably infer many of the details). Furthermore, there may be different species of nuclei, such as different elements (e.g. Helium or Hydrogen) and different charge states of a given element (e.g. He 1+ or He 2+). In general, plasmas may be out of thermodynamic equilibrium and hence the different species may have different temperatures.
Let me also define what is typically meant by a heat capacity. A heat capacity is the amount of thermal energy required to raise the temperature of an amount of substance by one unit. For convenience, I will use quantities per unit volume as follows: $\varepsilon$ is the amount of thermal energy per unit volume at a given temperature. Therefore we are looking for a relation of the form $\varepsilon=C(T)$, where $C$ is some mathematical function. It is straightforward to then get a heat capacity per unit mass or per particle.
As you have pointed out, a non-ionized gas of particles has a heat capacity as follows (note I am using constants more common to plasma physics): $$\varepsilon=\frac{3}{2}n k_B T \tag{1}$$
However, some species may be fermions (as in the case of electrons) with a heat capacity defined through the following relation:
$$\varepsilon_{fermion}=4\pi\left(\frac{2m_e c^2}{h^2 c^2}\right)^{3/2}\int_0^\infty \frac{x^{3/2}}{\exp[(x-\mu)/T]+1} dx \tag{2}$$
Here the physical constants have their usual meaning, $n_{fermion}$ is the density of the particles and the chemical potential $\mu=\mu(n_{fermion},T)$. Please read the appropriate literature on the chemical potential and the Fermi-Dirac distribution. These formulas arise from the quantum nature of fermions (only one allowed per quantum state). If this looks complicated, please bear in mind that at high temperatures or low densities, Equation (2) reduces to Equation (1).
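As a sanity check of that last claim, one can evaluate the integral in Equation (2) numerically and compare it with its Maxwell-Boltzmann limit $e^{\mu/T}\,T^{5/2}\,\Gamma(5/2)$ (my own sketch; prefactors are dropped, and the values of $\mu$ and $T$ are arbitrary illustrative choices):

```python
import math

def fermi_integral(mu, T, steps=200_000, xmax=200.0):
    """Midpoint-rule estimate of the integral in Eq. (2):
    I = integral_0^inf  x^(3/2) / (exp((x - mu)/T) + 1) dx."""
    dx = xmax / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += x ** 1.5 / (math.exp((x - mu) / T) + 1.0) * dx
    return total

def classical_limit(mu, T):
    """Same integral with the '+1' dropped: exp(mu/T) * T^(5/2) * Gamma(5/2)."""
    return math.exp(mu / T) * T ** 2.5 * math.gamma(2.5)

mu, T = -10.0, 1.0            # mu/T << 0: hot and/or dilute, non-degenerate
ratio = fermi_integral(mu, T) / classical_limit(mu, T)
print(ratio)                  # very close to 1: Eq. (2) reduces to Eq. (1)
```

For $\mu/T$ closer to (or above) zero the ratio falls below 1: that is the degenerate regime where the quantum "+1" matters and Equation (1) fails.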
Similarly, any species of bosons in a plasma satisfies the analogous relation:
$$\varepsilon_{boson}=4\pi\left(\frac{2m c^2}{h^2 c^2}\right)^{3/2}\int_0^\infty \frac{x^{3/2}}{\exp[(x-\mu)/T]-1} dx$$
This is from the Bose-Einstein distribution.
Aside from the thermal energy discussed above, the other main contribution to the heat capacity of a plasma is the ionization energy of the ions. There is an energy $E_i$ required to remove an electron from an atom. For hydrogen, this is called the Rydberg energy, $E_i=\mathrm{Ry}=13.6$ eV. For other elements there are varying ionization energies, most of which are determined experimentally or using complicated computer codes. In any case, the resulting contribution is the sum over ionization stages of the density of each stage multiplied by its ionization energy:
$$\varepsilon_{ion}=\sum_i E_i n_i \tag{3}$$
Here $n_i$ is the density of each type and charge state of ion, such as H$^{1+}$, Fe$^{20+}$, etc. Note that the fractions of each charge state are strong functions of both the total density and the temperature and hence change rapidly with plasma conditions. For example, at low temperatures an iron plasma may be almost completely neutral, but at a high temperature it may consist of significant fractions of Fe$^{19+}$, Fe$^{20+}$ and Fe$^{21+}$. These fractions can be solved for using the Saha-Boltzmann equation or a collisional-radiative model.
I cover most of what I say in my PhD thesis, which is freely available here and which contains further references should you be interested: http://etheses.whiterose.ac.uk/13357/ All the formulas are explained in detail there. Chapter 3 and possibly 4 are the most relevant. Also note that within there, I use plasma physics units conventions (explained in the Appendix) where $k_B=1$, so temperatures are in electronvolts.
A brief example: Hydrogen atoms at a density of $10^{20}$ cm$^{-3}$ are 60% ionized. This means that there are $6\times 10^{19}$ cm$^{-3}$ electrons and H$^{1+}$ ions and $4\times 10^{19}$ cm$^{-3}$ neutral H atoms. We can ignore quantum effects and use Equation (1) for each of the three species. Then we use Equation (3) with only one term in the sum, the above $n_{\mathrm{H}^{1+}}$ and $E_{\mathrm{H}^{1+}}=\mathrm{Ry}$.
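That example can be written out numerically (my own sketch; densities in cm$^{-3}$, temperature in eV with $k_B=1$, so $\varepsilon$ comes out in eV cm$^{-3}$; the temperature of 10 eV is an arbitrary illustrative choice):

```python
RYDBERG_EV = 13.6  # hydrogen ionization energy in eV

def hydrogen_plasma_energy_density(n_total, ion_fraction, T_eV):
    """Eq. (1) summed over electrons, H+ ions and neutral H, plus Eq. (3)
    with its single term n_H+ * Ry.  Quantum effects are neglected."""
    n_ion = ion_fraction * n_total              # H+ density
    n_e = n_ion                                 # one free electron per H+
    n_neutral = (1.0 - ion_fraction) * n_total
    thermal = 1.5 * (n_e + n_ion + n_neutral) * T_eV  # Eq. (1), three species
    ionization = n_ion * RYDBERG_EV                   # Eq. (3), one term
    return thermal + ionization

eps = hydrogen_plasma_energy_density(1e20, 0.6, 10.0)
print(eps)  # 2.4e21 (thermal) + 8.16e20 (ionization) = 3.216e21 eV/cm^3
```

Note that the ionization term supplies about a quarter of the total here, which is precisely the extra energy cost a neutral gas would not have.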
So, in summary, a plasma has a higher heat capacity than an equivalent un-ionized gas (if such a gas were possible), since there are more particles (the free electrons) and there is an energy cost of ionization.