[Math] Bayesian update from uniform prior to uniform posterior ?!

bayes-theorem, bayesian, game-theory

I was working through a signaling game problem recently and the proof suggested the following:

Actor A has a type: $t \sim \mathrm{Uniform}[-1,1]$

Actor A gives signal $\pi^*$ that perfectly separates types at $\pi^*$.
In other words, $\Pr(\pi^* \mid t \in [-1,\pi^*]) = 1$ and $\Pr(\pi^* \mid t \in (\pi^*,1]) = 0$ (this is the likelihood).

Actor B observes $\pi^*$, yielding posterior beliefs about actor A: $t \sim \mathrm{Uniform}[-1,\pi^*]$.

My question is as follows. As I read it, this process has the same prior and posterior family (uniform), yet the likelihood is not any standard named distribution, and the uniform is not a conjugate prior for any common likelihood. By my reasoning, the posterior is not straightforwardly uniform, since it is formed from a non-conjugate prior. Am I missing something here? Does it make sense to say $t \mid \pi^* \sim \mathrm{Uniform}[-1,\pi^*]$, or should some other distribution be specified? Alternatively, is it possible that the answer to the problem contains an error?

Best Answer

There's no problem with the answer. By Bayes' formula:

$$ p(t|\pi^*) \propto p(\pi^*|t)p(t). $$

Since $p(\pi^*|t)$ is $1$ only when $t\in [-1, \pi^*]$ and $0$ otherwise, you get the resulting posterior.
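Spelling out the normalization (the prior density is $p(t) = \tfrac{1}{2}$ on $[-1,1]$):

$$ p(t \mid \pi^*) = \frac{p(\pi^* \mid t)\, p(t)}{\int_{-1}^{1} p(\pi^* \mid s)\, p(s)\, ds} = \frac{\mathbf{1}\{t \in [-1,\pi^*]\} \cdot \tfrac{1}{2}}{\int_{-1}^{\pi^*} \tfrac{1}{2}\, ds} = \frac{1}{\pi^* + 1} \quad \text{for } t \in [-1,\pi^*], $$

which is exactly the $\mathrm{Uniform}[-1,\pi^*]$ density.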

The posterior is not uniform on $[-1,1]$, but it is uniform on $[-1,\pi^*]$; you're misreading the role of conjugacy. Conjugacy is a convenience for certain prior/likelihood pairs, not a requirement for Bayes' rule, and with this all-or-nothing likelihood the update simply truncates the prior's support. Player $B$ knows player $A$ signals only when $A$'s type is below $\pi^*$, and he updates accordingly.
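If it helps to see it numerically, here is a minimal Monte Carlo sketch (assuming NumPy; the cutoff $\pi^* = 0.3$ is an arbitrary illustrative choice). Draws from the $\mathrm{Uniform}[-1,1]$ prior that survive conditioning on the signal are flat on $[-1,\pi^*]$:

```python
import numpy as np

rng = np.random.default_rng(0)
pi_star = 0.3  # arbitrary cutoff, just for illustration

# Prior: t ~ Uniform[-1, 1]
t = rng.uniform(-1.0, 1.0, size=1_000_000)

# Likelihood of the signal pi*: 1 if t <= pi*, else 0.
# Conditioning on the signal keeps exactly the draws with t <= pi*.
posterior_draws = t[t <= pi_star]

# The surviving draws look Uniform[-1, pi*]:
print(posterior_draws.min(), posterior_draws.max())  # ~ -1 and ~ pi_star
hist, _ = np.histogram(posterior_draws, bins=10, range=(-1.0, pi_star))
print(hist / hist.sum())  # roughly equal mass in every bin, i.e. a flat density
```

The histogram over $[-1,\pi^*]$ comes out essentially flat, matching the closed-form posterior above.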
