Quantum Mechanics – Mathematical Explanation of Bra-Ket Notation

Tags: hilbert-space, linear-algebra, quantum-mechanics, quantum-computer, quantum-entanglement

$\newcommand{\hp}[1]{\hphantom{#1}}$

We have the entangled state of two pairs of qubits:

$$
|\psi \rangle =\frac{1}{2}|0011\rangle-\frac{1}{2}|0110\rangle-\frac{1}{2}|1001\rangle+\frac{1}{2}|1100\rangle
\tag{01}\label{01}
$$

Then unitary transformations $A$ and $B$ are applied to it:

\begin{align}
A & =\:\:\frac{1}{2}\:\,
\begin{bmatrix}
\hp{-}i & \hp{-}1 &\hp{-}1 & \hp{-}i\hp{..} \\
-i & \hp{-}1 & -1 & \hp{-}i\hp{..} \\
\hp{-}i & \hp{-}1 & -1 & -i\hp{..} \\
-i & \hp{-}1 &\hp{-}1 & -i\hp{..}
\end{bmatrix}
\tag{02a}\label{02a}\\
B & =\frac{1}{\sqrt{2}}
\begin{bmatrix}
\hp{-}1 & \,\hp{.}0 & \hp{-}0 & \hp{-}1\hp{-} \\
-1 & \,\hp{.}0 & \hp{-}0 & \hp{-}1\hp{-} \\
\hp{-} 0 & \,\hp{.}1 & \hp{-}1 & \hp{-}0\hp{-}\\
\hp{-} 0 & \,\hp{.}1 & -1 & \hp{-}0\hp{-}
\end{bmatrix}
\tag{02b}\label{02b}
\end{align}

\begin{align}
&(A \otimes B) |\psi \rangle =\\
&\frac{1}{2 \sqrt{2}} \left(|0000\rangle -|0010\rangle -|0101\rangle +|0111\rangle +|1001\rangle +|1011\rangle -|1100\rangle -|1110\rangle\right)
\tag{03}\label{03}
\end{align}

I am not deeply educated in quantum mechanics, so I need an explanation of how the last expression was obtained.

I can compute $A \otimes B$, whether it is a tensor product or a Kronecker product (I am not sure which of the two it is).

But how the result of that product is then applied to $|\psi \rangle$ is not clear to me. I need to know what math is applied.

You can see the source of the problem in the document Quantum Pseudo-Telepathy, on page 22.

Here is the computed product $A \otimes B$, in "Kronecker product form" and in "tensor product form", if that helps.

$$A \otimes B=\frac{1}{2 \sqrt{2}}\left(\tiny{
\begin{array}{cccccccccccccccc}
i & 0 & 0 & i & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & i & 0 & 0 & i \\
-i & 0 & 0 & i & -1 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -i & 0 & 0 & i \\
0 & i & i & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & i & i & 0 \\
0 & i & -i & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 & i & -i & 0 \\
-i & 0 & 0 & -i & 1 & 0 & 0 & 1 & -1 & 0 & 0 & -1 & i & 0 & 0 & i \\
i & 0 & 0 & -i & -1 & 0 & 0 & 1 & 1 & 0 & 0 & -1 & -i & 0 & 0 & i \\
0 & -i & -i & 0 & 0 & 1 & 1 & 0 & 0 & -1 & -1 & 0 & 0 & i & i & 0 \\
0 & -i & i & 0 & 0 & 1 & -1 & 0 & 0 & -1 & 1 & 0 & 0 & i & -i & 0 \\
i & 0 & 0 & i & 1 & 0 & 0 & 1 & -1 & 0 & 0 & -1 & -i & 0 & 0 & -i \\
-i & 0 & 0 & i & -1 & 0 & 0 & 1 & 1 & 0 & 0 & -1 & i & 0 & 0 & -i \\
0 & i & i & 0 & 0 & 1 & 1 & 0 & 0 & -1 & -1 & 0 & 0 & -i & -i & 0 \\
0 & i & -i & 0 & 0 & 1 & -1 & 0 & 0 & -1 & 1 & 0 & 0 & -i & i & 0 \\
-i & 0 & 0 & -i & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & -i & 0 & 0 & -i \\
i & 0 & 0 & -i & -1 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & i & 0 & 0 & -i \\
0 & -i & -i & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & -i & -i & 0 \\
0 & -i & i & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 & 0 & 0 & -i & i & 0 \\
\end{array}}
\right)$$

$$A \otimes B=\frac{1}{2 \sqrt{2}}\left(\tiny{
\begin{array}{cccc}
\left(
\begin{array}{cccc}
i & 0 & 0 & i \\
-i & 0 & 0 & i \\
0 & i & i & 0 \\
0 & i & -i & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
1 & 0 & 0 & 1 \\
-1 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 \\
0 & 1 & -1 & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
1 & 0 & 0 & 1 \\
-1 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 \\
0 & 1 & -1 & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
i & 0 & 0 & i \\
-i & 0 & 0 & i \\
0 & i & i & 0 \\
0 & i & -i & 0 \\
\end{array}
\right) \\
\left(
\begin{array}{cccc}
-i & 0 & 0 & -i \\
i & 0 & 0 & -i \\
0 & -i & -i & 0 \\
0 & -i & i & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
1 & 0 & 0 & 1 \\
-1 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 \\
0 & 1 & -1 & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
-1 & 0 & 0 & -1 \\
1 & 0 & 0 & -1 \\
0 & -1 & -1 & 0 \\
0 & -1 & 1 & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
i & 0 & 0 & i \\
-i & 0 & 0 & i \\
0 & i & i & 0 \\
0 & i & -i & 0 \\
\end{array}
\right) \\
\left(
\begin{array}{cccc}
i & 0 & 0 & i \\
-i & 0 & 0 & i \\
0 & i & i & 0 \\
0 & i & -i & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
1 & 0 & 0 & 1 \\
-1 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 \\
0 & 1 & -1 & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
-1 & 0 & 0 & -1 \\
1 & 0 & 0 & -1 \\
0 & -1 & -1 & 0 \\
0 & -1 & 1 & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
-i & 0 & 0 & -i \\
i & 0 & 0 & -i \\
0 & -i & -i & 0 \\
0 & -i & i & 0 \\
\end{array}
\right) \\
\left(
\begin{array}{cccc}
-i & 0 & 0 & -i \\
i & 0 & 0 & -i \\
0 & -i & -i & 0 \\
0 & -i & i & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
1 & 0 & 0 & 1 \\
-1 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 \\
0 & 1 & -1 & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
1 & 0 & 0 & 1 \\
-1 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 \\
0 & 1 & -1 & 0 \\
\end{array}
\right) & \left(
\begin{array}{cccc}
-i & 0 & 0 & -i \\
i & 0 & 0 & -i \\
0 & -i & -i & 0 \\
0 & -i & i & 0 \\
\end{array}
\right) \\
\end{array}}
\right)$$
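If it helps, here is a minimal NumPy sketch of the computation I am trying to understand: build the $16 \times 16$ matrix with `np.kron` and apply it to the column of amplitudes of $|\psi \rangle$ as an ordinary matrix–vector product. (The basis state $|q_1 q_2 q_3 q_4\rangle$ is indexed by its bit string read as a binary integer.)

```python
import numpy as np

# The 4x4 unitaries from (02a) and (02b).
A = 0.5 * np.array([[ 1j, 1,  1,  1j],
                    [-1j, 1, -1,  1j],
                    [ 1j, 1, -1, -1j],
                    [-1j, 1,  1, -1j]])
B = np.array([[ 1, 0,  0, 1],
              [-1, 0,  0, 1],
              [ 0, 1,  1, 0],
              [ 0, 1, -1, 0]]) / np.sqrt(2)

# |psi> from (01): amplitudes indexed by the 4-bit label as a binary integer.
psi = np.zeros(16, dtype=complex)
psi[0b0011], psi[0b0110], psi[0b1001], psi[0b1100] = 0.5, -0.5, -0.5, 0.5

# Applying A (x) B is an ordinary matrix-vector product in 16 dimensions.
out = np.kron(A, B) @ psi

# Print the non-zero amplitudes of the result.
for k in np.flatnonzero(~np.isclose(out, 0)):
    print(f"|{int(k):04b}>: {complex(out[k]):.4f}")
```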

Best Answer

The bra vector can be treated as a row vector, while the ket vector is a column vector. So, if a vector space has $N$ dimensions, the row vectors for a basis would be $\langle 0|, \langle 1|, ⋯, \langle N-1|$, while the corresponding column vectors for the basis would be $|0\rangle , |1\rangle , ⋯, |N-1\rangle $.

Vectors in the tensor product $H⊗H'$ of two vector spaces $H$ and $H'$ can be punned as vectors in a larger vector space, such that if $u ∈ H$ and $u' ∈ H'$, then the corresponding column vectors would be denoted by concatenation as $|u\rangle ⊗|u'\rangle = |uu'\rangle $ and the dual row vector as $\langle u'|⊗\langle u| = \langle u'u|$. The reversal of order extends the algebra consistently such that the adjoint $A^†$ remains an involution operation (i.e. $A^{††} = A$ and $(AB)^† = B^† A^†$) - applicable now to all matrices, both square and non-square. Correspondingly, $|u\rangle ^† = \langle u|$, $\langle u|^† = |u\rangle $, similarly for $\langle u'|$ and $|u'\rangle $, and $$|uu'\rangle ^† = (|u\rangle ⊗|u'\rangle )^† = |u'\rangle ^†⊗|u\rangle ^† = \langle u'|⊗\langle u| = \langle u'u|.$$

Matrices, algebraically, are all sums of ket-bra combinations, e.g. $|0\rangle \langle 0|$ is the matrix unit whose only non-zero element is a 1 in the upper left corner. So, the tensor product of matrices can be defined in such a way that $$(A⊗A')|u\rangle ⊗|u'\rangle = A|u\rangle ⊗A'|u'\rangle = |(Au)(A'u')\rangle .$$ Thus, if $A = |v\rangle \langle w|$ and $A' = |v'\rangle \langle w'|$, then $$(A⊗A')|u\rangle ⊗|u'\rangle = (|v\rangle \langle w||u\rangle )⊗(|v'\rangle \langle w'||u'\rangle ) = |vv'\rangle \langle w|u\rangle \langle w'|u'\rangle .$$ Algebraically, if we can treat the product sequentially, then $$\langle w'w| |uu'\rangle = \langle w'|\langle w||u\rangle |u'\rangle = \langle w|u\rangle \langle w'||u'\rangle = \langle w|u\rangle \langle w'|u'\rangle .$$ Hence, with that convention, we have: $$(A⊗A')|uu'\rangle = |vv'\rangle \langle w|u\rangle \langle w'|u'\rangle = |v\rangle |v'\rangle \langle w'|\langle w| |uu'\rangle = |v\rangle A'\langle w| |uu'\rangle .$$ Thus, the tensor product for matrices can be defined in terms of the bras and kets by: $$|v\rangle \langle w|⊗A' = |v\rangle A'\langle w|.$$
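The defining property above — the matrix tensor product acts factor-by-factor on product vectors — is easy to check numerically. A sketch using NumPy's `kron`, with random complex matrices and vectors standing in for $A$, $A'$, $|u\rangle$, $|u'\rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random 4x4 complex matrices and 4-vectors standing in for A, A', u, u'.
A  = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Ap = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
u  = rng.normal(size=4) + 1j * rng.normal(size=4)
up = rng.normal(size=4) + 1j * rng.normal(size=4)

# (A (x) A')(|u> (x) |u'>)  ==  (A|u>) (x) (A'|u'>)
lhs = np.kron(A, Ap) @ np.kron(u, up)
rhs = np.kron(A @ u, Ap @ up)
assert np.allclose(lhs, rhs)
```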

For qubits, the vector space has 2 basis elements. For brevity, we will define the following: $$b = \langle 0|, \hspace{1em} p = \langle 1|, \hspace{1em} d = |0\rangle , \hspace{1em} q = |1\rangle ,$$ as well as the corresponding tensor products - with their renumbering as elements $0, 1, 2, 3$: $$ bb = \langle 00| = \langle 0|, \hspace{1em} pb = \langle 10| = \langle 1|, \hspace{1em} bp = \langle 01| = \langle 2|, \hspace{1em} pp = \langle 11| = \langle 3|,\\ dd = |00\rangle = |0\rangle , \hspace{1em} dq = |01\rangle = |1\rangle , \hspace{1em} qd = |10\rangle = |2\rangle , \hspace{1em} qq = |11\rangle = |3\rangle . $$ So, the tensor product of two 1-qubit spaces serves as a 4-dimensional vector space with 4 basis elements.

If we adopt the axioms: $$bd = 1 = pq, \hspace{1em} bq = 0 = pd, \hspace{1em} db + qp = 1$$ then all the matrix operations can be represented. In fact, the associative linear algebra generated from the 4 elements $\{b,d,p,q\}$, given, in the abstract, by these relations, is universal. It contains every finite-dimensional real (and complex) vector and matrix algebra within it! So, it's not just for qubits; it can be used for everything.
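In the concrete row/column picture described earlier, the three axioms are ordinary matrix identities (concatenations like $dd$ are Kronecker products rather than matrix products, so only the adjacent bra-ket contractions appear here). A small sketch:

```python
import numpy as np

# Concrete representation: bras as 1x2 rows, kets as 2x1 columns.
b = np.array([[1, 0]])    # <0|
p = np.array([[0, 1]])    # <1|
d = np.array([[1], [0]])  # |0>
q = np.array([[0], [1]])  # |1>

# The three defining relations:
assert (b @ d).item() == 1 and (p @ q).item() == 1  # bd = 1 = pq
assert (b @ q).item() == 0 and (p @ d).item() == 0  # bq = 0 = pd
assert np.array_equal(d @ b + q @ p, np.eye(2))     # db + qp = 1
```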

Footnote: this applies generally to semi-rings, as well. In particular, in Computer Science, if you attach this algebra to the algebra of regular expressions, the result is an algebra for context-free expressions molded almost directly on the Chomsky-Schützenberger Theorem. So, in effect, Chomsky and Schützenberger are the Faraday to our Maxwell for this new paradigm, though we haven't gotten around (quite yet) to writing our "Maxwell's Treatise" for context-free expressions. Consider this advanced notice.

From these relations, in fact, you can write $$ \langle i| |j\rangle = δ^i_j, \hspace{1em} (i, j = 0, 1, 2, 3), \\ |0\rangle \langle 0| + |1\rangle \langle 1| + |2\rangle \langle 2| + |3\rangle \langle 3| = 1 $$ where the Kronecker delta is defined by $δ^i_j = 1$ if $i = j$ and $δ^i_j = 0$ if $i ≠ j$, for $i, j = 0, 1, 2, 3$.

The last of these relations is demonstrated as follows: $$ddbb + dqpb + qdbp + qqpp = d(db + qp)b + q(db + qp)p = d1b + q1p = db + qp = 1.$$
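The same completeness relation can also be checked in the concrete column-vector picture, where each two-qubit ket is a Kronecker product of $2\times1$ columns and each bra is simply the transpose of its ket. A sketch:

```python
import numpy as np

d = np.array([[1], [0]])  # |0>
q = np.array([[0], [1]])  # |1>

# Two-qubit kets dd, dq, qd, qq as 4x1 columns (elements |0>, ..., |3>).
kets = [np.kron(x, y) for x in (d, q) for y in (d, q)]

# Completeness: sum_i |i><i| is the identity on the 4-dimensional space.
S = sum(k @ k.T for k in kets)
assert np.array_equal(S, np.eye(4))
```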

With these conventions in place, we can write: $$\begin{align} ψ &= ½ (ddqq - dqqd - qddq + qqdd),\\ A &= ½ i(ddbb + ddpp - dqbb + dqpp + qdbb - qdpp - qqbb - qqpp) \\ &+ ½(ddpb + ddbp + dqpb - dqbp + qdpb - qdbp + qqpb + qqbp),\\ B &= √½ (ddbb + ddpp - dqbb + dqpp + qdpb + qdbp + qqpb - qqbp). \end{align}$$ According to your account, which is inherited from the account given in your reference, we should have: $$\begin{align} (A⊗B)ψ &= √⅛ (dddd - ddqd - dqdq + dqqq + qddq + qdqq - qqdd - qqqd) \\ &= √⅛ ((dd - qq)dd - (dd + qq)qd + (qd - dq)dq + (dq + qd)qq). \end{align}$$ We won't get that, by the way: it's wrong. They made a mistake, and you can already see that clearly: there are no factors of $i$ in their product, even though they appear in the matrix $A$.

We have the following reductions: $$\begin{align} A &= ½ (i(d + q)(d - q)bb + (d + q)(d + q)pb + i(d - q)(d + q)pp + (d - q)(d - q)bp), \\ B &= √½ (d(d - q)bb + q(d + q)pb + d(d + q)pp + q(d - q)bp). \end{align}$$ We can multiply the matrices one at a time, by using the conventions $$Aψ = (A⊗I)ψ, \hspace{1em} Bψ = (I⊗B)ψ$$ noting the general identity for matrix tensor products: $$(A⊗B)(A'⊗B') = AA'⊗BB'$$ so that we can actually write the product $A⊗B$ in either of two orders: $$(A⊗I)(I⊗B) = A⊗B = (I⊗B)(A⊗I).$$
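The mixed-product identity, and hence the two factorizations of $A⊗B$, can be confirmed numerically for the specific matrices in this problem. A NumPy sketch:

```python
import numpy as np

A = 0.5 * np.array([[ 1j, 1,  1,  1j],
                    [-1j, 1, -1,  1j],
                    [ 1j, 1, -1, -1j],
                    [-1j, 1,  1, -1j]])
B = np.array([[ 1, 0,  0, 1],
              [-1, 0,  0, 1],
              [ 0, 1,  1, 0],
              [ 0, 1, -1, 0]]) / np.sqrt(2)
I4 = np.eye(4)

AB = np.kron(A, B)
# (A (x) I)(I (x) B) = A (x) B = (I (x) B)(A (x) I)
assert np.allclose(np.kron(A, I4) @ np.kron(I4, B), AB)
assert np.allclose(np.kron(I4, B) @ np.kron(A, I4), AB)
```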

Thus, applying $B$ first, we get: $$Bψ = ½√½ (ddd(d + q) - dqq(d - q) - qdq(d + q) + qqd(d - q))$$ For instance, showing how this applies to a given term, we have: $$ B ddqq = I⊗B dd⊗qq = (Idd)⊗(Bqq), \\ Bqq = √½ (d(d - q)bbqq + q(d + q)pbqq + d(d + q)ppqq + q(d - q)bpqq) = √½ d(d + q),$$ since $bbqq = 0$, $pbqq = 0$ and $bpqq = 0$ all vanish, while $ppqq = 1$.
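This intermediate result can be checked numerically: expand $½√½ (ddd(d + q) - dqq(d - q) - qdq(d + q) + qqd(d - q))$ into basis amplitudes and compare against $I⊗B$ applied to the amplitude vector of $ψ$. A sketch, assuming NumPy:

```python
import numpy as np

B = np.array([[ 1, 0,  0, 1],
              [-1, 0,  0, 1],
              [ 0, 1,  1, 0],
              [ 0, 1, -1, 0]]) / np.sqrt(2)

psi = np.zeros(16, dtype=complex)
psi[0b0011], psi[0b0110], psi[0b1001], psi[0b1100] = 0.5, -0.5, -0.5, 0.5

# Apply B alone, as I (x) B.
Bpsi = np.kron(np.eye(4), B) @ psi

# Expansion of (1/2)sqrt(1/2) (ddd(d+q) - dqq(d-q) - qdq(d+q) + qqd(d-q)).
c = 0.5 / np.sqrt(2)
expected = np.zeros(16, dtype=complex)
for ket, amp in [(0b0000, c), (0b0001, c), (0b0110, -c), (0b0111, c),
                 (0b1010, -c), (0b1011, -c), (0b1100, c), (0b1101, -c)]:
    expected[ket] = amp
assert np.allclose(Bpsi, expected)
```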

Applying $A$ next, to this result, we get, with a little algebra: $$\begin{align} (A⊗B)ψ &= ¼√½ i(d + q)(d - q)d(d + q) - ¼√½ (d + q)(d + q)q(d - q) \\ & - ¼√½ (d - q)(d - q)q(d + q) + ¼√½ i(d - q)(d + q)d(d - q) \\ & = ¼√½ i((d + q)(d - q) + (d - q)(d + q))dd \\ & - ¼√½ ((d + q)(d + q) + (d - q)(d - q))qd \\ & + ¼√½ i((d + q)(d - q) - (d - q)(d + q))dq \\ & + ¼√½ ((d + q)(d + q) - (d - q)(d - q))qq \\ & = ½√½ (i(dd - qq)dd - (dd + qq)qd + i(qd - dq)dq + (dq + qd)qq). \end{align}$$ You and your reference are both missing the factors of $i$ on the terms containing the kets $dddd = |0000\rangle $, $qqdd = |1100\rangle $, $qddq = |1001\rangle $ and $dqdq = |0101\rangle $.

I suspect that incongruity is what you were really asking about, wasn't it? Remember: the reference is only a preprint. The answer is that they made a mistake.
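A direct numerical check bears this out. The final expression derived above gives factors of $i$ on $|0000\rangle$, $|0101\rangle$, $|1001\rangle$ and $|1100\rangle$, and applying $\mathrm{kron}(A, B)$ to the amplitude vector of $ψ$ reproduces exactly that (a NumPy sketch):

```python
import numpy as np

A = 0.5 * np.array([[ 1j, 1,  1,  1j],
                    [-1j, 1, -1,  1j],
                    [ 1j, 1, -1, -1j],
                    [-1j, 1,  1, -1j]])
B = np.array([[ 1, 0,  0, 1],
              [-1, 0,  0, 1],
              [ 0, 1,  1, 0],
              [ 0, 1, -1, 0]]) / np.sqrt(2)

psi = np.zeros(16, dtype=complex)
psi[0b0011], psi[0b0110], psi[0b1001], psi[0b1100] = 0.5, -0.5, -0.5, 0.5

out = np.kron(A, B) @ psi

# Corrected result: factors of i on |0000>, |0101>, |1001>, |1100>.
c = 1 / (2 * np.sqrt(2))
expected = np.zeros(16, dtype=complex)
for ket, amp in [(0b0000, 1j * c), (0b0010, -c), (0b0101, -1j * c),
                 (0b0111, c), (0b1001, 1j * c), (0b1011, c),
                 (0b1100, -1j * c), (0b1110, -c)]:
    expected[ket] = amp
assert np.allclose(out, expected)
```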
