I doubt that sampling in Times Square will give unbiased information about New York City as a whole, but ignoring that:
A flat prior is one with constant density; here, on the interval $[0,1]$, that means $\pi_0(\theta)=1$ for $0 \le \theta \le 1$.
The likelihood is proportional to the conditional probability that all three people say yes, which is $\theta^3$.
So the posterior distribution is proportional to the product of these two, i.e. also proportional to $\theta^3$ on $[0,1]$.
I will leave it to you to find the constant of proportionality, mean, mode, median, etc.
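If you want to check your answers numerically, here is a minimal Python sketch (my addition, not part of the original answer) that recovers the normalizing constant, mean, and median of the unnormalized posterior $\theta^3$ by simple midpoint integration and bisection:

```python
# Numerical sketch: the posterior is proportional to theta^3 on [0, 1].
# The exact answers are left as the exercise intends; this only lets you
# verify them.

def post(t):
    return t ** 3  # unnormalized posterior density

def integrate(f, a=0.0, b=1.0, n=20000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

Z = integrate(post)                            # normalizing constant
mean = integrate(lambda t: t * post(t)) / Z    # posterior mean
# Median: bisect for m with P(theta <= m) = 1/2.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    if integrate(post, 0.0, mid) / Z < 0.5:
        lo = mid
    else:
        hi = mid
median = (lo + hi) / 2

print(round(Z, 4), round(mean, 4), round(median, 4))
```

The mode needs no computation: the normalized density $4\theta^3$ is increasing on $[0,1]$, so the mode is at $\theta = 1$.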
$$
\frac{\partial}{\partial M}\int p(x)(x-M)^2dx = 0,\\
\int p(x)\frac{\partial (x-M)^2}{\partial M}dx = 0,\\
\int p(x)(2(M-x))dx = 0,\\
2\int p(x)Mdx = 2\int p(x) xdx,\\
M\int p(x)dx = \int p(x) xdx,\\
M = \int p(x) xdx.
$$
Edit: Explanation for non-math people
Imagine a simple distribution: $x=0$ with $p=0.2$ and $x=1$ with $p=0.8$. Let's take $M=0.5$ first; then the variance is:
$$
\sum_x p(x)(x-M)^2 = 0.2\times 0.5^2 + 0.8\times 0.5^2 = 0.25
$$
What if we increase $M$ by a tiny amount, say $0.001$? How much will the variance change?
$$
\sum_x p(x)(x-M-0.001)^2 = \sum_x p(x)\left((x-M)^2-2(x-M)\times 0.001 + 0.001^2\right) = \\
\sum_x p(x)(x-M)^2 + 2\times 0.001\times\sum_x p(x)(M-x) + 0.001^2\sum_x p(x)\\
= 0.25 + 2\times0.001\times(0.2\times0.5 - 0.8\times 0.5) + 10^{-6}
$$
Thus, neglecting $10^{-6}$, which is far smaller than the second term, we can say that the variance decreases by the second term ($-0.0006$). We can keep increasing $M$, and the variance will keep decreasing, until the second term is no longer negative. The minimum is reached when this term is exactly zero. Hence, at the minimum, $\int p(x)(x-M)dx = 0$.
What we did here is called differentiation, and the reason we did it is that if the derivative (the slope) exists at a point of minimum, it is zero there.
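The whole argument can be checked with a short Python sketch (my addition, not part of the answer): scan candidate values of $M$ for this two-point distribution and see where the expected squared deviation is smallest.

```python
# Two-point distribution from the example: x = 0 with p = 0.2,
# x = 1 with p = 0.8. Its mean is 0.2*0 + 0.8*1 = 0.8.
xs = [0.0, 1.0]
ps = [0.2, 0.8]

def expected_sq_dev(M):
    """Expected squared deviation sum_x p(x) * (x - M)^2."""
    return sum(p * (x - M) ** 2 for x, p in zip(xs, ps))

# Evaluate on a fine grid of M values in [0, 1] and pick the minimizer.
grid = [i / 1000 for i in range(1001)]
best_M = min(grid, key=expected_sq_dev)
print(best_M)  # 0.8 -- the mean, as the derivation predicts
```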
Best Answer
I think this is a good question, as it points to an interesting conceptual problem about what a parameter is. In general, when we talk about the probability density function (PDF) of a particular distribution, we usually refer to one established choice out of infinitely many density functions, all of which would describe the distribution equally well. Let me explain this with some examples.
The normal distribution: $\sigma$ versus $\sigma^2$
First, notice that the normal distribution can be parameterized in different ways. At Wikipedia, for instance, you will find the parameterization $$ f_1(x \ | \ \mu,\sigma) \;, \tag{1} $$ where $\mu$ is called the mean and $\sigma$ the standard deviation. But many people prefer $$ f_2(x \ | \ \mu,\sigma^2) \;, \tag{2} $$ where $\sigma^2$ is called the variance. Since $f_1$ and $f_2$ have different signatures, they are two different functions, even though their definitions are identical: $$ f_1(x | \mu, \sigma) \ = \ \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right) \ = \ f_2(x \ | \ \mu,\sigma^2) \;. \tag{3} $$
The normal distribution: precision $h$
Second, if I remember correctly, Gauß used the parameter $h$, which he called precision, and defined the PDF of the normal distribution as $$ f_3(x \ | \ \mu,h) = \dfrac{h}{\sqrt{\pi}} \exp\left( -h^2(x - \mu)^2 \right) \;. \tag{4} $$ It is yet another parameterization with a different measure of dispersion, and even though (4) looks different from (1), both equations describe the same distribution (to see this, set $h = \dfrac{1}{\sqrt{2\sigma^2}}$).
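As a quick sanity check (my own sketch, not part of the answer), the three parameterizations can be evaluated at the same point to confirm that they yield identical densities:

```python
import math

# Evaluate parameterizations (1), (2) and (4) at one point with
# illustrative values mu = 1, sigma = 2.
mu, sigma = 1.0, 2.0

def f1(x, mu, sigma):   # parameterized by the standard deviation, eq. (1)
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def f2(x, mu, var):     # parameterized by the variance, eq. (2)
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def f3(x, mu, h):       # Gauss's precision parameterization, eq. (4)
    return h / math.sqrt(math.pi) * math.exp(-h ** 2 * (x - mu) ** 2)

h = 1 / math.sqrt(2 * sigma ** 2)   # the substitution given in the text
x = 0.7
print(f1(x, mu, sigma), f2(x, mu, sigma ** 2), f3(x, mu, h))  # all three agree
```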
The normal distribution: mode and median
Third, notice that $\mu$ is not only the mean of the normal distribution, but it is also the mode and the median. So, if you ask for distributions that can be parameterized with the mode or the median, the normal distribution is again an example. It is just a matter of how you interpret $\mu$.
The log-normal distribution: geometric mean $m$
Let me conclude with the log-normal distribution. It is often defined by relating it to the normal distribution: a random variable $X$ is log-normally distributed if $\ln(X)$ is normally distributed.
This is why the following PDF became the established function: $$ g_1(x \ | \ \mu,\sigma) = \dfrac{1}{\sqrt{2\pi\sigma^2x^2}} \exp\left( -\dfrac{(\ln(x) - \mu)^2}{2\sigma^2} \right) \;. \tag{5} $$ Notice that $\mu$ and $\sigma$ do not correspond to the mean and the standard deviation of the log-normal distribution. Another, and in my view more natural, definition is $$ g_2(x \ | \ m,\sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2 x^2}} \exp\left( -\dfrac{\ln^2(\frac{x}{m})}{2\sigma^2} \right) \;, \tag{6} $$ where $m$ corresponds to the geometric mean.
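Again a small sketch (my addition, not from the answer) confirming that (5) and (6) describe the same density once we set $m = e^{\mu}$, since $\ln(x/m) = \ln(x) - \mu$:

```python
import math

def g1(x, mu, sigma):   # established parameterization, eq. (5)
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
        / math.sqrt(2 * math.pi * sigma ** 2 * x ** 2)

def g2(x, m, var):      # geometric-mean parameterization, eq. (6)
    return math.exp(-math.log(x / m) ** 2 / (2 * var)) \
        / math.sqrt(2 * math.pi * var * x ** 2)

# Illustrative values (my choice): mu = 0.5, sigma = 0.3.
mu, sigma = 0.5, 0.3
m = math.exp(mu)        # geometric mean corresponding to mu
x = 2.0
print(g1(x, mu, sigma), g2(x, m, sigma ** 2))  # identical densities
```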