Yes, it is just the chain rule for entropy.
The chain rule for entropy is Theorem 2.2.1 in Cover and Thomas.
$$H(X,Y) = H(X) + H(Y|X)$$
You can use the chain rule when you condition on another random variable $Z$. In this case, you get
$$H(X,Y|Z) = H(X|Z) + H(Y|X,Z)$$
The proof is word for word the same as the proof of the original chain rule. In this case, write each of the probabilities conditioned on $z$. For example, write $p(x,y|z)$ instead of $p(x,y)$.
Or, alternatively, you can write something like this (as they suggest in Eqn. (2.20)):
\begin{align*}
H(X,Y|Z)
&= -E[\log p(X,Y|Z)] \\
&= -E[\log p(X|Z)] - E[\log p(Y|X,Z)] \\
&= H(X|Z) + H(Y|X,Z)
\end{align*}
With this version of the chain rule for conditional entropies, you get the two equations that you wanted by applying it to the random variables $X, \hat{X}, E$.
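As a quick sanity check (my own sketch, not part of the original answer), the conditional chain rule can be verified numerically on any small joint distribution; the distribution below over three binary variables is chosen arbitrarily:

```python
import math

# An arbitrary joint distribution p(x, y, z) over binary variables
# (the values are illustrative; they sum to 1).
p = {
    (0, 0, 0): 0.10, (0, 0, 1): 0.05,
    (0, 1, 0): 0.15, (0, 1, 1): 0.10,
    (1, 0, 0): 0.05, (1, 0, 1): 0.20,
    (1, 1, 0): 0.15, (1, 1, 1): 0.20,
}

def marg(keep):
    """Marginal distribution over the given coordinate indices (0=X, 1=Y, 2=Z)."""
    out = {}
    for xyz, q in p.items():
        key = tuple(xyz[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

def cond_entropy(target, given):
    """H(target | given) = -sum p(t, g) * log2 p(t | g)."""
    joint = marg(target + given)
    cond = marg(given)
    h = 0.0
    for key, q in joint.items():
        h -= q * math.log2(q / cond[key[len(target):]])
    return h

lhs = cond_entropy((0, 1), (2,))                             # H(X,Y|Z)
rhs = cond_entropy((0,), (2,)) + cond_entropy((1,), (0, 2))  # H(X|Z) + H(Y|X,Z)
print(abs(lhs - rhs) < 1e-12)  # → True
```

Since the chain rule is an identity, the two sides agree (up to floating-point error) no matter which distribution you plug in.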
We have the following closely related notions:
- entropy (the expected information value)
- probability distribution (which outcomes should we already expect?)
- uncertainty (are we certain of the outcome, or will we learn something?)
Low entropy
When we get a highly expected piece of information, we are already almost certain of the content and gained hardly any information value. Hence high probability, low uncertainty, low entropy.
Similarly, when we do NOT get a very unexpected piece of information, we were almost certain not to get it. NOT getting it contains low information value. Hence low probability, low uncertainty, low entropy.
High entropy
When we get a highly unpredictable uniformly random "flip of a coin"-like piece of information, we did not expect it and were quite uncertain of what it would be. The information value is very high! Hence almost 50/50 probability, high uncertainty, high entropy.
Example
Suppose you were to guess an English word. Then consider the expected information value of the answers to the following questions:
- Does it contain the letter "E"?
- Does it contain the letter "Z"?
You should expect the answers to be "MAYBE" and "NO". Say a randomly chosen English word contains the letter "E" with probability $p\approx 1/8 = 12.5\,\%$, whereas "Z" is quite rare (say $p\approx 1/64$). Using those figures, we have:
$$
\begin{align}
I[E] &= -\tfrac{1}{8}\log_2\!\left(\tfrac{1}{8}\right) = \tfrac{3}{8} &&= 0.375\\
I[\text{not } E] &= -\tfrac{7}{8}\log_2\!\left(\tfrac{7}{8}\right) &&\approx 0.169\\
H[E,\ \text{not } E] &= I[E] + I[\text{not } E] &&\approx 0.544
\end{align}
$$
and
$$
\begin{align}
I[Z] &= -\tfrac{1}{64}\log_2\!\left(\tfrac{1}{64}\right) = \tfrac{6}{64} &&\approx 0.094\\
I[\text{not } Z] &= -\tfrac{63}{64}\log_2\!\left(\tfrac{63}{64}\right) &&\approx 0.022\\
H[Z,\ \text{not } Z] &= I[Z] + I[\text{not } Z] &&\approx 0.116
\end{align}
$$
Hence we will most likely learn something from the answer to the first question (it splits the candidate words into portions of $1/8$ and $7/8$), whereas we will probably not learn much from confirming that "Z" is not in the word, since that excludes only a very small set of candidates (1 in 64).
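The arithmetic above is easy to check with a short script (a Python sketch of my own; `h2` is the standard binary entropy function). The results match the totals above up to rounding:

```python
import math

def h2(p):
    """Binary entropy: expected information of a yes/no answer with P(yes) = p."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(round(h2(1 / 8), 3))   # "E" question  → 0.544
print(round(h2(1 / 64), 3))  # "Z" question  → 0.116
```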
Ideal type of question
A yes/no or true/false question like this has the potential to bisect the candidate space into equal parts, so if we could ask the right question and be sure to either include or exclude half of the candidate words, we would gain 1 bit of information. The ideal question should therefore have a coin-flip 50/50 probability.
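Reusing the binary entropy function from above, a tiny check (again a Python sketch) confirms that the 50/50 question is the most informative yes/no question:

```python
import math

def h2(p):
    """Binary entropy in bits of a yes/no answer with P(yes) = p."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A perfectly bisecting question yields exactly 1 bit...
print(h2(0.5))  # → 1.0
# ...and every other split on a grid of probabilities yields no more.
print(max(h2(p / 100) for p in range(1, 100)) <= 1.0)  # → True
```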
Best Answer
Typesetting comment: you should use \mathcal{X} to typeset $\mathcal{X}$, not \chi.

By definition,
$$H(X \mid \hat{X}, E=1) = - \sum_{x} \sum_{\hat{x}} p(x, \hat{x} \mid E=1) \log_2 p(x \mid \hat{x}, E=1).$$
Multiplying both sides by $P(E=1)$ turns each $p(x, \hat{x} \mid E=1)$ in the sum into the joint probability $p(x, \hat{x}, E=1)$, which gives the expression you are after.