I'm not sure how a p-value would come into it -- a p-value for a test of what hypothesis?
In any case, it doesn't really tell you how confident you can be in the estimated value. You might compute a confidence interval for each of the $\pi_i$.
If you're prepared to assume a multinomial model, the marginal distribution of counts in cell $i$ will be $\text{binomial}(n,\pi_i)$ ($n$ is the total observed, your $N$), and you can construct approximate binomial proportion confidence intervals in a plethora of ways of varying accuracy and simplicity.
In large samples it usually doesn't matter much which you pick; a straight normal approximation to the binomial is often used, and a two-tailed $1-\alpha$ CI for $\pi_i$ is then
$$p_i\pm Z_{1-\alpha/2}\cdot \sqrt{p_i(1-p_i)/n}$$
If $p_i$ is not very small or large and $n$ is large, a useful bound on a 95% CI is $p_i \pm 1/\sqrt{n}$, since $p_i(1-p_i)\leq \frac14$ implies $1.96\sqrt{p_i(1-p_i)/n}\leq 1.96/(2\sqrt{n})\approx 1/\sqrt{n}$. (It will always contain the interval above, but when the proportion is near 0 or 1 it can be much wider than the 'real' asymptotic interval.)
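As an illustration, here is a minimal sketch of both intervals; the cell count `x_i` and total `n` are made up for the example:

```python
import math

# Hypothetical counts: x_i observations in cell i out of n total.
x_i, n = 30, 200
p_i = x_i / n

# Two-sided 95% CI via the normal approximation (z_{0.975} ~ 1.96).
z = 1.959963984540054
half_width = z * math.sqrt(p_i * (1 - p_i) / n)
wald_ci = (p_i - half_width, p_i + half_width)

# Conservative bound: p(1-p) <= 1/4, so the half-width is at most
# 1.96 * sqrt(1/(4n)), which is approximately 1/sqrt(n).
bound_ci = (p_i - 1 / math.sqrt(n), p_i + 1 / math.sqrt(n))

print(wald_ci)   # normal-approximation interval
print(bound_ci)  # wider, conservative interval
```

The bound interval always contains the normal-approximation one, as the inequality above guarantees.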
However, the meaning of a confidence interval is a rather subtle concept, one that's frequently misunderstood.
Another possibility is to take a Bayesian approach and calculate a posterior distribution for the $\pi_i$ (from which a different, and possibly more intuitive, kind of interval might be produced, though its limits are often very similar to those of the corresponding confidence interval).
Some discussion of a Bayesian interval
The usual Bayesian interval for this sort of thing would be a credible interval (/credible region for vectors of parameters).
I don't actually have a basic reference for you that I like.
I'll do a single parameter, no nuisance parameter case. The mathematics is simple but it may seem a little abstract, so I'll also do an example.
We have a probabilistic model for data ($\mathbf{x}$) with an unknown parameter $\theta$; in a simple case we might say $p(\mathbf{x}|\theta)$ is our model for the data (which might be a density or probability function for the data given the parameter).
The underlying idea is simply an application of Bayes' rule:
$p(\theta|\mathbf{x})\propto p(\mathbf{x}|\theta) \cdot p(\theta)$
The second term on the right is called the prior on $\theta$; you take some prior for the parameter, multiply by the likelihood and normalize so it integrates to 1 (which in simple cases is just a matter of recognizing the density).
In situations where there are nuisance parameters, those are integrated out: $p(\theta|\mathbf{x})=\int p(\theta,\boldsymbol{\phi}|\mathbf{x})\,d\boldsymbol{\phi}$
In "nice" cases this can all be done algebraically. In more complicated cases, we have to use other tools (MCMC is a common way to deal with fairly complicated models, for example, but numerical integration is sometimes used, or asymptotic approximations or any number of other tools).
Once you have your posterior, you can take an interval in it that contains $(1-\alpha)$ of the probability (has a total of $\alpha$ outside the interval). This might be done in any number of ways - e.g. equal tail probability (often used with more or less symmetric cases); the HPD interval (gives shortest intervals when the posterior is unimodal); or it might be done with reference to some loss function.
A common Bayesian approach to a multinomial problem is to assume a Dirichlet prior on the vector of proportions, which with the multinomial likelihood results in a Dirichlet posterior. (It's a common choice because among other things it has a neat property - it's what's called a conjugate prior, in that the posterior is of the same form.)
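A minimal sketch of that conjugate update, using made-up counts and a uniform (symmetric Dirichlet) prior, and assuming numpy is available:

```python
import numpy as np

# Hypothetical multinomial counts over k = 4 categories.
counts = np.array([12, 30, 45, 13])

# Symmetric Dirichlet(1, 1, 1, 1) prior (uniform over the simplex).
prior = np.ones_like(counts, dtype=float)

# Conjugacy: Dirichlet prior + multinomial likelihood gives a
# Dirichlet posterior with parameters (prior + counts).
posterior = prior + counts

# Posterior mean of each pi_i is alpha_i / sum(alpha).
post_mean = posterior / posterior.sum()

# Draws from the posterior, e.g. for Monte Carlo interval estimates.
rng = np.random.default_rng(0)
draws = rng.dirichlet(posterior, size=10_000)

print(post_mean)
print(draws.mean(axis=0))  # should be close to post_mean
```

The Monte Carlo draws are one simple way to get credible intervals for any function of the proportion vector, not just a single $\pi_i$.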
When concern is with the proportion in a single category, this corresponds to a $\text{Beta}(\alpha,\beta)$ prior for $\pi_i$ combined with the binomial count ($x_i$) in the category of interest.
This is easy enough to develop by hand:
$p(\pi_i|x_i)\propto p(x_i|\pi_i)\cdot p(\pi_i)$
$\qquad\quad\:\:\,\propto {\pi_i}^{x_i} (1-\pi_i)^{n-x_i} \cdot\, \pi_i^{\alpha-1}(1-\pi_i)^{\beta-1}$ (dropping constants of proportionality)
$\qquad\quad\:\:\,\propto \pi_i^{x_i+\alpha-1} (1-\pi_i)^{n-x_i+\beta-1}$ (which we recognize as a $\text{Beta}(x_i+\alpha,n-x_i+\beta)$ posterior)
A probability interval for $\pi_i$ is then obtained by choosing tail cutoffs that put the desired probability inside the interval.
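For instance, with made-up counts and a uniform $\text{Beta}(1,1)$ prior, the equal-tail cutoffs can be read off the posterior quantile function (here assuming scipy is available):

```python
from scipy.stats import beta

# Hypothetical data: x_i = 30 in the category of interest out of n = 200,
# with a uniform Beta(1, 1) prior (alpha = beta = 1).
x_i, n = 30, 200
a, b = 1, 1

# Posterior is Beta(x_i + alpha, n - x_i + beta).
posterior = beta(x_i + a, n - x_i + b)

# Equal-tail 95% credible interval: cut 2.5% from each tail.
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
print(lo, hi)
```

With $n$ this large and $p_i$ not too extreme, the limits land close to those of the normal-approximation confidence interval, as noted earlier.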
Some more or less relevant links
http://en.wikipedia.org/wiki/Confidence_interval#Credible_interval
What's the difference between a confidence interval and a credible interval?
http://freakonometrics.hypotheses.org/18117
http://www.bayesian-inference.com/credible
There are two parts to your question - how to display discrete data (a data visualization issue) and how to do it in Python (a "what function do I call" issue).
I will deal with the first one.
With discrete distributions, there are a number of possible ways to display data.
Leaving aside direct implementation issues for the present, I see three main competitors:

1. the empirical cdf.

2. a sample probability function.

   These are quite suitable for count data, for example.

3. a barplot.

   This is quite suitable for ordered categories. If you order the bars from largest to smallest (or in some other meaningful-to-your-needs fashion), it's also suitable for unordered categories.
There are numerous other possibilities. However, I don't think a histogram is generally suitable for discrete data, especially not one where the bins are chosen automatically. The first problem is that a histogram density estimate uses area rather than height to convey relative probabilities, so it fairly directly conveys an impression of continuity. The second issue is bin width -- you need to choose it carefully, or you may end up with alternating bins containing two categories and then one, or with a smaller or larger gap between two particular categories than between the others (often at an end category):
In such a histogram the gaps between bars are not of constant width, throwing off the impression the plot conveys.
As for how to do things like this in Python: after you choose a display, that would make a good, more specific question (though probably more on topic elsewhere; worded right it might fit better on Stack Overflow, but you should check their help for what's on topic there. With careful phrasing it might survive here, or it might work on Super User).