It turns out that it is the distribution of birth stellar masses and, most importantly, the lifetimes of stars as a function of mass that are responsible for your result.
Let's fix the number of stars at 200 billion. Then let's assume they follow the "Salpeter birth mass function", so that $n(M) \propto M^{-2.3}$ (where $M$ is in solar masses) from $M=0.1$ up to much larger masses. More complicated mass functions are known now - the Kroupa multiple power laws, the Chabrier lognormal - which say there are fewer low-mass stars than Salpeter predicts, but they don't change the gist of the argument.
Using the total number of stars in the Galaxy, we equate this to the integral of $n(M)$ to get the constant of proportionality:
$$n(M) = 1.3\times10^{10} M^{-2.3}.$$
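As a quick numerical cross-check of this normalisation (a sketch using the 200 billion star count and the $0.1M_{\odot}$ lower cut-off assumed above):

```python
# Normalise the Salpeter birth mass function n(M) = A * M**-2.3
# so that its integral from M = 0.1 solar masses to infinity
# equals the assumed 2e11 stars in the Galaxy.
N_total = 2e11   # assumed total number of stars (from the text)
alpha = 2.3      # Salpeter slope
M_low = 0.1      # lower mass cut-off in solar masses

# Integral of M**-alpha from M_low to infinity = M_low**(1-alpha) / (alpha - 1)
integral = M_low**(1 - alpha) / (alpha - 1)
A = N_total / integral
print(f"A = {A:.2e}")  # ~1.3e10, as quoted in the text
```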
Now let's assume most stars are on the main sequence and that the luminosity scales roughly as $L = M^{3.5}$ ($L$ is also in solar units), thus $dL/dM = 3.5 M^{2.5}$.
We now say $n(L) = n(M)\times dM/dL$ and obtain
$$ n(L) = 3.7\times10^{9} M^{-4.8} = 3.7\times10^{9} L^{-1.37}.$$
The total luminosity of a collection of stars between two luminosities is
$$ L_{\rm galaxy} = \int^{L_2}_{L_1} n(L) L \ dL = 5.9\times 10^{9} \left[L^{0.63} \right]^{L_{2}}_{L_1}$$
This equation shows that although there are far more low-mass stars than high mass stars in the Galaxy, it is the higher mass stars that dominate the luminosity.
If we take $L_1=0.1^{3.5}$ (the luminosity of a $0.1M_{\odot}$ star), we can ask what upper limit $L_2$ gives $L_{\rm galaxy} = 1.3\times 10^{10} L_{\odot}$ ($=5\times10^{36}$ W).
The answer is only $3.5L_{\odot}$. But we see many stars in the Galaxy that are far brighter than this, so surely the Galaxy ought to be much brighter?
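Inverting the integral above for $L_2$ is a one-liner (a sketch using the $5.9\times10^9$ prefactor and the luminosities from the text):

```python
# Solve  5.9e9 * (L2**0.63 - L1**0.63) = 1.3e10  for L2,
# where L1 is the luminosity of a 0.1-solar-mass star (L = M**3.5).
L_galaxy = 1.3e10   # total luminosity in solar units (from the text)
C = 5.9e9           # prefactor from the luminosity integral
L1 = 0.1**3.5       # ~3.2e-4 L_sun

L2 = (L_galaxy / C + L1**0.63) ** (1 / 0.63)
print(f"L2 = {L2:.2f} L_sun")  # ~3.5
```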
The flaw in the above chain of reasoning is that the Salpeter mass function represents the birth mass function, and not the present-day mass function.
Most of the stars present in the Galaxy were born about 10-12 billion years ago.
The lifetime of a star on the main sequence is roughly $10^{10} M/L = 10^{10} M^{-2.5}$ years. So most of the high-mass stars in the calculation above vanished long ago, and the mass function is effectively truncated above about $0.9M_{\odot}$. But that also means that, because the luminosity is dominated by the most luminous stars, the luminosity of the Galaxy is effectively the number of $\sim 1M_{\odot}$ stars times a solar luminosity.
My Salpeter mass function above does, coincidentally, give $\sim 10^{10}$ stars with $M>1M_{\odot}$ in the Galaxy. However, you should think of this as: $\sim 10^{10}$ stars with $M>1 M_{\odot}$ have been *born* in our Galaxy. A large fraction of these are not around today, and that is actually the lesson one learns from the integrated luminosity number you quote!
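Both numbers in this paragraph follow directly from the formulae above (a sketch using the $10^{10}M^{-2.5}$ yr lifetime and the $1.3\times10^{10}$ normalisation):

```python
# Main-sequence lifetime: t = 1e10 * M**-2.5 years.
# Stars born ~12 Gyr ago have died if their lifetime is shorter
# than that, so the surviving mass function is truncated at the
# mass whose lifetime equals the age.
age = 1.2e10                       # years
M_trunc = (age / 1e10) ** (-1 / 2.5)
print(f"Truncation mass ~ {M_trunc:.2f} M_sun")  # ~0.9

# Number of stars *born* with M > 1 under the Salpeter normalisation:
A = 1.3e10
N_above_1 = A * 1.0**(-1.3) / 1.3  # integral of A*M**-2.3 from 1 to infinity
print(f"N(M > 1) ~ {N_above_1:.1e}")  # ~1e10
```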
EDIT: A postscript on some of the assumptions made. The Galaxy is much more complicated than this. "Most of the stars present in the Galaxy were born 10-12 billion years ago." This is probably not quite correct, depending on where you look. The bulge of the Galaxy contains about 50 billion stars and was created in the first billion years or so. The halo also formed early and quickly, but probably only contains a few percent of the stellar mass. The moderately metal-poor thick disk contains perhaps another 10-20% and was formed in the first few billion years. The rest ($\sim 50$%) of the mass is in the disk and was formed quasi-continuously over about 8-10 billion years (source: Wyse 2009). None of this detail alters the main argument, but it lowers the fraction of $>1M_{\odot}$ stars that have been born but already died.
A second caveat is the assumption that the luminosity of the Galaxy is dominated by main-sequence stars. This is only true at ultraviolet and blue wavelengths. At red and infrared wavelengths, evolved red giants are dominant. The way this alters the argument is that some fraction of the "dead" massive stars are actually red giants, which typically survive for only a few percent of their main-sequence lifetime but are orders of magnitude more luminous during this period. This means the contribution of the typical low-mass main-sequence stars that dominate the stellar numbers is even less significant than the calculation above suggests.
I think the following image, which comes from Tomczak et al. (2014) and the so-called ZFOURGE/CANDELS galaxy survey should do the trick.
It shows how the galaxy stellar mass function (i.e. the number of galaxies per unit mass per cubic megaparsec that have a certain stellar mass) evolves as a function of redshift. As you might imagine this is not just a case of counting galaxies and estimating their masses - you have to account for the fact that it is harder to see low-mass galaxies.
Anyway, these are their results and they clearly show that a galaxy like the Milky Way that has about 200 billion stars and a stellar mass of about $5\times10^{10}M_{\odot}$ (note that the total mass of the Milky Way is dominated by dark matter), is quite a massive galaxy (note the logarithmic y-axis).
In other words, small galaxies dominate the statistics. However, when you look at the Hubble Deep or Ultradeep fields, it is quite difficult to use this information. You will always tend to see the most luminous and most massive galaxies, and the low-mass galaxies will not be represented in proportion to the mass functions shown in this picture. So there are actually two separate questions here, and I'm not sure I can definitively answer either:
(i) What is the average mass of a galaxy; (ii) what is the average mass of a galaxy seen in the Hubble Deep fields?
The answer to (ii) will obviously be much bigger than the answer to (i). Fortunately, you can see from the plot that the straight(ish) sections below about the mass of the Milky Way are power laws with slope $\sim -0.5$. That means $M\Phi(M) \propto M^{+0.5}$, and when you integrate this over some mass range, it is the upper limit that dominates. So low-mass galaxies do not dominate the stellar mass; in fact, it is galaxies about the size of the Milky Way that dominate it. Galaxies with $M>10^{11}M_{\odot}$ (in stars) become increasingly rare, so these do not contribute much either. Therefore, very roughly, the number of stars in the Universe is given by the number of galaxies with mass within a factor of a few of the Milky Way's, multiplied by the number of stars in the Milky Way.
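To see how strongly the upper limit dominates such an integral, here is a quick sketch (the slope is from the text; the mass range and the split point are illustrative values I have chosen):

```python
# If Phi(M) ~ M**-0.5, then the stellar mass contributed per unit
# mass interval is M*Phi(M) ~ M**0.5, whose integral grows as M**1.5
# and is therefore dominated by the upper limit.
def mass_integral(M1, M2):
    # integral of M**0.5 dM = (2/3) * M**1.5 (overall normalisation dropped)
    return (2 / 3) * (M2**1.5 - M1**1.5)

total = mass_integral(1e8, 5e10)        # illustrative full mass range
top_decade = mass_integral(5e9, 5e10)   # top factor-of-ten in mass
print(f"fraction from the top decade: {top_decade / total:.2f}")  # ~0.97
```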
I cannot provide an answer for the average mass of a galaxy in the UDF or any other survey volume, because it is unclear how many of the lowest-mass objects there are or what lower mass cut-off to work with. The plots shown for the CANDELS field will be perfectly representative of the UDF or any other deep observation; cosmic variance should not be an issue for order-of-magnitude estimates.
EDIT: As an example, take the average space density of $5\times 10^{10}M_{\odot}$ galaxies to be $10^{-2.5}$ per dex per Mpc$^{3}$ in the low-redshift universe, and assume that galaxies over a 1 order of magnitude (1 dex) range of mass contribute almost all the stellar mass. If the radius of the observable universe is 46 billion light years ($\sim 15,000$ Mpc - see Size of the Observable Universe) and the average star is $0.25M_{\odot}$, then there are:
$$N_* = 10^{-2.5} \times 5\times10^{10} \times \frac{4\pi}{3} \times (15000)^3/0.25 \simeq 10^{22}$$
stars in the observable universe.
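The arithmetic in that estimate can be checked directly (a sketch using only the numbers assumed in the text):

```python
import math

# Order-of-magnitude star count for the observable universe.
density = 10**-2.5   # Milky-Way-mass galaxies per dex per Mpc^3 (assumed above)
M_stellar = 5e10     # stellar mass of a Milky-Way-like galaxy, in M_sun
R = 15000            # comoving radius of the observable universe, in Mpc
m_avg = 0.25         # average stellar mass, in M_sun

volume = (4 / 3) * math.pi * R**3          # comoving volume in Mpc^3
N_stars = density * M_stellar * volume / m_avg
print(f"N_stars ~ {N_stars:.1e}")          # ~1e22
```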
Supernovae do occur all the time, if you are thinking on cosmic time scales. The number I have typically seen for the Milky Way is that supernovae occur roughly once per century - maybe a little more often, maybe a little less. Relative to the lifetime of a typical star, that is a very short timescale indeed.
Now, the relevant number of stars is not actually 400 billion. There are two basic ways of producing a supernova. One is with a very massive star. Massive stars, meaning stars at least about 10 times as massive as the Sun, are quite rare relative to stars like the Sun and lower-mass stars. The other is when a white dwarf experiences some sort of mass transfer (the details of which are still not understood) from another star, exceeds its maximum mass, and explodes. While binary star systems are quite common, not all binaries are close enough for sufficient mass to be transferred from one star to the other.
You are correct that stars are dying all the time, much more frequently than supernovae are observed. The key is that when stars like our Sun die, they do not produce supernovae.
One other thing worth noting is that we do not necessarily see every supernova. This seems, on the face of it, ridiculous - supernovae are extremely bright! However, since very massive stars produce supernovae and also have very short lifetimes, some supernovae may be hidden because they occur while the star is still surrounded by the remnants of the molecular cloud from which it formed. With more and more advanced telescopes, the likelihood of completely missing a supernova in the Milky Way now is pretty low, but there have been supernovae in the Milky Way in the 400 years since SN 1604 that were not visible to the naked eye because they were blocked out by dust. The best-known example is Cassiopeia A, whose supernova remnant shows it occurred in the mid-to-late 1600s, but it was not recognized at the time. Looking around some more, it appears that another young supernova remnant has recently been identified (Youngest Stellar Explosion in our Galaxy Discovered) that was not seen by the naked eye.