I apologize in advance for my ignorance if this is a question with an obvious answer… I am not experienced in this field. But are such particles in the universe points with a charge, or are they very small spheres with a charge? Or does it not even matter in the end? This isn't homework, it's just curiosity.
[Physics] the “shape” of atomic/subatomic particles
particle-physics
Related Solutions
The question: "I'm wondering if there is some good reason why the universe as we know it has to have twelve particles rather than just four."
The short answer: Our current standard description of the spin-1/2 property of the elementary particles is incomplete. A more complete theory would require that these particles come in three generations.
The medium answer: The spin-1/2 of the elementary fermions is an emergent property. The more fundamental spin property behaves like position: the Heisenberg uncertainty principle applies to consecutive measurements of the fundamental spin the same way it applies to position measurements. This fundamental spin is invisible to us because it is renormalized away. What's left is three generations of the particle, each with the usual spin-1/2.
When a particle moves through positions it does so by way of an interaction between position and momentum; these are complementary variables. The equivalent concept for spin-1/2 is "mutually unbiased bases" (MUBs). There are (at most) three MUBs for spin-1/2. Letting a particle's spin move among them means that the number of degrees of freedom of the particle is tripled. So when you compute the long-time propagators over that Hopf algebra, you end up with three times the usual number of particles. Hence there are three generations.
The long answer: The two (more or less classical) things we can theoretically measure for a spin-1/2 particle are its position and its spin. If we measure its spin, the spin is then forced into an eigenstate of spin so that measuring it again gives the same result. That is, a measurement of spin causes the spin to be determined. On the other hand, if we measure its position, then by the Heisenberg uncertainty principle, we will cause an unknown change to its momentum. The change in momentum makes it impossible for us to predict the result of a subsequent position measurement.
As quantum physicists, we long ago grew accustomed to this bizarre behavior. But imagine that nature is parsimonious with her underlying machinery. If so, we'd expect the fundamental (i.e. before renormalization) measurements of a spin-1/2 particle's position and spin to be similar. For such a theory to work, one must show that after renormalization, one obtains the usual spin-1/2.
A possible solution to this conundrum is given in the paper:
Carl Brannen, "Spin Path Integrals and Generations", Found. Phys. 40:1681–1699 (2010)
http://arxiv.org/abs/1006.3114
The paper is a straightforward QFT resummation calculation. It assumes a strange (to us) spin-1/2 where measurements act like the not so strange position measurements. It resums the propagators for the theory and finds that the strange behavior disappears over long times. The long time propagators are equivalent to the usual spin-1/2. Furthermore, they appear in three generations. And it shows that the long time propagators have a form that matches the mysterious lepton mass formulas of Yoshio Koide.
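As a quick numerical illustration of the formula mentioned above (this check is not from the paper itself; the charged-lepton masses below are the standard PDG values, assumed here for the sketch), Koide's observation is that $(m_e + m_\mu + m_\tau)/(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2 \approx 2/3$:

```python
import math

# Charged-lepton masses in MeV (PDG values, assumed here for illustration)
m_e, m_mu, m_tau = 0.5109989, 105.6583745, 1776.86

# Koide's ratio: (sum of masses) / (sum of square roots of masses)^2
Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2

print(f"Koide ratio Q = {Q:.6f}   (2/3 = {2/3:.6f})")
```

With these inputs the ratio agrees with 2/3 to within the experimental precision of the tau mass, which is what makes the formula "mysterious".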
Peer review: The paper was peer-reviewed through an arduous process involving three reviewers. As with any journal article, it had a managing editor and a chief editor. Complaints about the physics have already been made by competent physicists who took the trouble of carefully reading the paper; it's unlikely that someone making a quick read is going to find something that hasn't already been argued through. The paper was selected by the chief editor of Found. Phys. as suitable for publication in that journal, and so was published last year.
The chief editor of Found. Phys. is now Gerard 't Hooft. His attitude toward publishing junk is quite clear; he writes:
How to become a bad theoretical physicist
On your way towards becoming a bad theoretician, take your own immature theory, stop checking it for mistakes, don't listen to colleagues who do spot weaknesses, and start admiring your own infallible intelligence. Try to overshout all your critics, and have your work published anyway. If the well-established science media refuse to publish your work, start your own publishing company and edit your own books. If you are really clever you can find yourself a formerly professional physics journal where the chief editor is asleep.
http://www.phys.uu.nl/~thooft/theoristbad.html
One hopes that 't Hooft wasn't asleep when he allowed this paper to be published.
Extensions: My next paper on the subject extends the above calculation to obtain the weak hypercharge and weak isospin quantum numbers. It uses methods similar to the above, that is, the calculation of long-time propagators, but with a more sophisticated method of manipulating the Feynman diagrams called "Hopf algebra" or "quantum algebra". I'm planning to send it to the same journal. It's close to finished; I basically need to reread it over and over and add references:
http://brannenworks.com/E8/HopfWeakQNs.pdf
In high-energy physics the energy scale is very important. As you said, matter is probed at smaller and smaller distances, and that requires more energy. Why is that?
Well, in natural units ($c = \hbar = 1$) some quantities mix with each other, i.e. there is very little difference between them (mainly just a proportionality constant). In particular:
$$[\text{Velocity}] = \text{number}$$
$$[\text{Energy}] = [\text{Mass}] = [\text{Momentum}]$$
and
$$[\text{Mass}] = [\text{Length}]^{-1}$$
From this it follows that $[Energy]$ is actually just inverse $[Length]$ hence the smaller the distances probed, the higher the energy scales.
If these relations seem strange, think of them like this: the highest achievable velocity is the speed of light $c$, and we already set that to one by our choice of natural units. This means that any other velocity satisfies $0 \leq v \leq 1$ and is thus a dimensionless number.
Also, from $E^2 = (pc)^2 + (mc^2)^2$ it follows that $E^2 = p^2 + m^2$. The last relation, which is the core of your question, follows from the fact that $\hbar/(mc)$ has units of length and in natural units becomes $m^{-1}$.
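The inverse relation between length and energy can be sketched numerically. In mixed units the conversion constant is $\hbar c \approx 197.327$ MeV·fm (a standard value); the function name and probe lengths below are just my own illustration:

```python
# hbar * c in mixed units: this single constant converts length <-> energy
HBARC_MEV_FM = 197.3269804  # MeV * fm

def energy_to_probe(length_fm):
    """Energy scale (MeV) needed to resolve structure at a given length (fm)."""
    return HBARC_MEV_FM / length_fm

# Resolving a proton (~1 fm) needs ~200 MeV; ten times smaller needs ten times more
for L in (1.0, 0.1, 0.001):
    print(f"{L:6.3f} fm  ->  {energy_to_probe(L):10.1f} MeV")
```

This is exactly the statement $[\text{Energy}] = [\text{Length}]^{-1}$: halve the distance you want to probe and you double the energy required.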
To finish up, your last point about gravity follows from the fact that gravitons interact very weakly at the energy scales we can probe; gravity only becomes relevant for individual particles at extremely small distances, of the order of the Planck length,
$$\ell_{\text{P}} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.616\,199(97) \times 10^{-35}\ \text{m}$$
This corresponds to huge energies that we currently have no access to. Everything done above is called dimensional analysis.
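The Planck length, and the corresponding Planck energy, can be reproduced directly from the SI constants (CODATA values assumed below):

```python
import math

hbar = 1.054571817e-34  # J*s, reduced Planck constant
G = 6.67430e-11         # m^3 kg^-1 s^-2, Newton's constant
c = 2.99792458e8        # m/s, speed of light (exact)

# Planck length: the scale where gravity becomes relevant for single particles
l_planck = math.sqrt(hbar * G / c**3)

# Planck energy: the energy needed to probe that length scale
E_planck_J = math.sqrt(hbar * c**5 / G)

print(f"Planck length ~ {l_planck:.3e} m")
print(f"Planck energy ~ {E_planck_J:.3e} J (~1.2e19 GeV)")
```

The Planck energy of roughly $10^{19}$ GeV is about fifteen orders of magnitude beyond what the LHC reaches, which is why gravity is invisible in collider experiments.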
Edit: To address the Higgs boson part of the question:
Don't think of the Higgs boson as a fundamental interaction, because it is not. We need high energies to produce the Higgs for a different reason. As others pointed out, the Higgs boson is an excitation of the Higgs field, and the boson itself is very massive. Remember that mass is energy: to produce a massive boson you need to supply at least enough energy to account for its mass. That kind of energy is not available in our everyday lives; only the LHC reaches energy scales that high. None of this affects how other particles interact with the Higgs field to gain mass.
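To put a number on "at least enough energy to account for its mass" (the 125 GeV Higgs mass is the measured value, rounded; the conversion factor is the standard electron-volt definition):

```python
# Minimum energy to create one Higgs boson at rest: E = m c^2
EV_TO_J = 1.602176634e-19  # joules per electron-volt (exact by definition)
m_higgs_GeV = 125.0        # Higgs mass in GeV/c^2 (measured value, rounded)

E_J = m_higgs_GeV * 1e9 * EV_TO_J
print(f"Energy to produce one Higgs: {E_J:.2e} J")
```

About $2 \times 10^{-8}$ J sounds tiny in everyday terms; the difficulty is concentrating that much energy into a single particle collision, which is what the LHC is built to do.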
Edit: Added small talk on gravity to address the OP's question in the comments:
For a subatomic particle, gravitational effects are extremely small due to its tiny mass. For gravity to become relevant for individual particles, we would need to investigate them at Planck-length scales. But gravity in general is certainly relevant in the universe: astronomical objects are very massive, and their combined mass produces gravitational fields with observable effects.
Best Answer
Continuing with @lusken's answer: the atom is perceived as a fuzzy ball with a highly dense nucleus (essentially point-like compared to the size of the atom itself), with the fuzzy boundary due to the electron cloud.
The electron clouds themselves come in different probability distributions, which gives them different "shapes".
EDIT1: The letters label the orbital angular momentum quantum number $\ell$: s for $\ell = 0$, p for $\ell = 1$, d for $\ell = 2$, and f for $\ell = 3$. Each of s, p, d, f contains several suborbitals (one for each magnetic quantum number $m$), which are the shapes usually depicted in orbital diagrams.
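A minimal sketch of the counting (the letter-to-$\ell$ mapping is the standard spectroscopic convention; the helper name is my own):

```python
# Standard spectroscopic letters for the orbital angular momentum quantum number l
ORBITAL_LETTERS = {0: "s", 1: "p", 2: "d", 3: "f"}

def n_suborbitals(l):
    """Number of suborbitals for a given l: one per magnetic quantum number m = -l..+l."""
    return 2 * l + 1

for l, letter in ORBITAL_LETTERS.items():
    print(f"l = {l} ({letter}): {n_suborbitals(l)} suborbital(s)")
```

This reproduces the familiar counts: one s orbital, three p orbitals, five d, and seven f, which is where the characteristic sphere/dumbbell/cloverleaf shapes come from.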