Since the other (very nice) answers here have not satisfied you, let me take a different route. I will illustrate what is going on in a simpler situation, and hopefully it will shed light on your question by analogy. Also, since the algebraic formalism doesn't seem to be to your liking, we'll think very geometrically.
Let's consider a 1-dimensional subspace $S$ of $\mathbf{R}^3$. Such a subspace is simply a line through the origin. In fact, for concreteness, let's take $S$ to be the familiar $x$-axis in $\mathbf{R}^3$.
What are all the possible dimensions of orthogonal subspaces to $S$?
Well, the zero subspace $\{0\}$ is orthogonal to $S$ for trivial reasons, and it provides the unique zero-dimensional example. This is also the case in your problem.
Next, we have one-dimensional subspaces which are orthogonal to $S$. These are lines through the origin spanned by vectors making a "right angle" with the $x$-axis. There are many such lines. The $y$-axis and the $z$-axis provide the most familiar examples. So $1$-dimensional orthogonal subspaces are also possible, there are lots of them, and if you take $S$ together with a one-dimensional orthogonal subspace, there's still plenty of $\mathbf{R}^3$ that remains "untouched".
Now, take any two distinct one-dimensional orthogonal subspaces from the previous example. These lines are spanned by two linearly independent vectors. Thus, taken together they span a $2$-dimensional subspace of $\mathbf{R}^3$, and since both of the basis vectors are orthogonal to $S$, every vector in this subspace is orthogonal to $S$. A $2$-dimensional subspace of $\mathbf{R}^3$ is a plane. This plane is going to be the $yz$-plane, which is probably familiar to you from multivariable calculus. You can probably visualize how the $x$-axis "points out of the $yz$-plane orthogonally".
But now we're done, because the next dimension to consider would be dimension $3$, and the only $3$-dimensional subspace of $\mathbf{R}^3$ is all of $\mathbf{R}^3$ itself, which contains $S$ and hence cannot be orthogonal to it.
So the possible dimensions for orthogonal subspaces were 0, 1, and 2. Now, a subspace and its orthogonal complement, taken together, "fill up" the entire ambient vector space. What we noticed in our example was that the $x$-axis and the $yz$-plane are not only orthogonal, but taken together they fill up $\mathbf{R}^3$. So the orthogonal complement of our one-dimensional subspace of $\mathbf{R}^3$ was two-dimensional.
Now, you can play the same sort of game with the problem in your question...it just won't be so easy to visualize what's going on. If $S$ is six-dimensional in $\mathbf{R}^9$, then you can find orthogonal subspaces of dimensions 0, 1, 2, or 3, and if you take a 3-dimensional orthogonal subspace, then $S$ and that subspace taken together fill up all of $\mathbf{R}^9$, so you've found the orthogonal complement.
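The $\mathbf{R}^3$ picture above can be checked numerically. Here is a minimal sketch in Python (the vectors and the sample point $v$ are my own illustrative choices, not from the answer): the $x$-axis is spanned by $e_1$, the $yz$-plane by $e_2, e_3$, and any $v$ splits into its pieces along $S$ and along $S^\perp$.

```python
# Sanity check: in R^3, the x-axis S and the y-z plane S^perp are orthogonal
# and together "fill up" the whole space.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e1 = (1.0, 0.0, 0.0)   # spans S, the x-axis
e2 = (0.0, 1.0, 0.0)   # spans the y-axis
e3 = (0.0, 0.0, 1.0)   # spans the z-axis

# Every basis vector of the y-z plane is orthogonal to S:
assert dot(e1, e2) == 0 and dot(e1, e3) == 0

# dim S + dim S^perp = 1 + 2 = 3 = dim R^3, and any vector v decomposes
# as (projection onto S) + (projection onto the y-z plane):
v = (2.0, -1.0, 3.0)
proj_S = tuple(dot(v, e1) * c for c in e1)
proj_P = tuple(dot(v, e2) * c2 + dot(v, e3) * c3 for c2, c3 in zip(e2, e3))
assert tuple(a + b for a, b in zip(proj_S, proj_P)) == v
```

The same check works in $\mathbf{R}^9$ with a six-dimensional $S$; only the bookkeeping gets longer.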
Let $\beta=\{w_1,w_2,\ldots,w_k\}$ and $\gamma=\{x_1,x_2,\ldots,x_m\}$ be bases for $W$ and $W^\perp$, respectively. It suffices to show that
$$\beta\cup\gamma=\{w_1,w_2,\ldots,w_k,x_1,x_2,\ldots,x_m\}$$
is a basis for $V$.
Given $v\in V$, it is well known that $v=v_1+v_2$ for some $v_1\in W$ and $v_2\in W^\perp$ (this is the orthogonal decomposition $V=W\oplus W^\perp$, which holds because $W$ is finite-dimensional). Also, because $\beta$ and $\gamma$ are bases for $W$ and $W^\perp$, respectively, there exist scalars
$a_1,a_2,\ldots,a_k,b_1,b_2,\ldots,b_m$ such that
$v_1=\displaystyle\sum_{i=1}^ka_iw_i$ and $v_2=\displaystyle\sum_{j=1}^mb_jx_j$. Therefore
$$v=v_1+v_2=\sum_{i=1}^ka_iw_i+\sum_{j=1}^mb_jx_j,$$
which shows that $\beta\cup\gamma$ generates $V$. Next, we show that
$\beta\cup\gamma$ is linearly independent. Given
$c_1,c_2,\ldots,c_k,d_1,d_2,\ldots,d_m$ such that
$\displaystyle\sum_{i=1}^kc_iw_i+\sum_{j=1}^md_jx_j={\it 0}$, then
$\displaystyle\sum_{i=1}^kc_iw_i=-\sum_{j=1}^md_jx_j$. It follows that
$$\sum_{i=1}^kc_iw_i\in W\cap W^\perp\quad\mbox{and}\quad
\sum_{j=1}^md_jx_j\in W\cap W^\perp.$$
But since $W\cap W^\perp=\{{\it 0}\,\}$ (gievn $x\in W\cap W^\perp$,
we have $\langle x,x\rangle=0$ and thus $x={\it 0}\,$), we have
$\displaystyle\sum_{i=1}^kc_iw_i=\sum_{j=1}^md_jx_j={\it 0}$. Therefore
$c_i=0$ and $d_j=0$ for each $i,j$ because $\beta$ and $\gamma$ are bases
for $W$ and $W^\perp$, respectively. Hence we conclude that $\beta\cup\gamma$ is linearly independent.
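A concrete instance of the argument can be verified directly. In this sketch (the vectors are my own illustrative choices), $W=\operatorname{span}\{w_1\}$ in $\mathbf{R}^3$ with $w_1=(1,1,0)$, so $W^\perp$ has basis $\{x_1,x_2\}$; the union of the two bases should then be a basis of $\mathbf{R}^3$, which for three vectors means a nonzero $3\times 3$ determinant.

```python
# Check that beta ∪ gamma is a basis when beta spans W and gamma spans W^perp.

w1 = (1.0, 1.0, 0.0)    # beta: basis of W
x1 = (1.0, -1.0, 0.0)   # gamma: basis of W^perp ...
x2 = (0.0, 0.0, 1.0)    # ... two vectors, since dim W^perp = 3 - 1 = 2

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# x1 and x2 really lie in W^perp:
assert dot(w1, x1) == 0 and dot(w1, x2) == 0

def det3(a, b, c):
    # Determinant of the 3x3 matrix with rows a, b, c (cofactor expansion).
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

# Nonzero determinant <=> {w1, x1, x2} is linearly independent, hence a
# basis of R^3 (three independent vectors in a 3-dimensional space).
assert det3(w1, x1, x2) != 0
```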
Suppose $v\in A_1^{\perp}\cap A_2^{\perp}$. Then $v\cdot a_1=v \cdot a_2=0$ for all $a_1 \in A_1$ and all $a_2 \in A_2$. Then for all $a_1+a_2 \in A_1+A_2$, $v\cdot (a_1+a_2)=v\cdot a_1+v \cdot a_2=0+0=0$. Thus $v\in (A_1+A_2)^{\perp}$.
Conversely, if $v\not\in A_1^{\perp}\cap A_2^{\perp}$ then without loss of generality let $v\not\in A_1^{\perp}$. Then there is $a_1\in A_1$ with $v\cdot a_1 \neq 0$. We know $0\in A_2$, so $v\cdot (a_1 +0)=v\cdot a_1 \neq 0$. Thus $v \not\in (A_1+A_2)^{\perp}$.
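The identity $(A_1+A_2)^{\perp}=A_1^{\perp}\cap A_2^{\perp}$ can be spot-checked on a tiny example (my own illustrative choice, not from the answer): in $\mathbf{R}^3$ take $A_1=\operatorname{span}\{e_1\}$ and $A_2=\operatorname{span}\{e_2\}$, so $A_1+A_2$ is the $xy$-plane and its complement is the $z$-axis.

```python
# Spot-check (A1 + A2)^perp = A1^perp ∩ A2^perp for A1 = span{e1},
# A2 = span{e2} in R^3.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def in_perp(v, generators):
    # v is orthogonal to a subspace iff it is orthogonal to its generators.
    return all(dot(v, g) == 0 for g in generators)

# e3 lies in A1^perp and in A2^perp, and indeed in (A1 + A2)^perp:
assert in_perp(e3, [e1]) and in_perp(e3, [e2]) and in_perp(e3, [e1, e2])

# e2 lies in A1^perp but not in (A1 + A2)^perp, matching the converse
# direction of the proof (take a1 = e2, 0 in A2):
assert in_perp(e2, [e1]) and not in_perp(e2, [e1, e2])
```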