This probability can be calculated exactly, and indeed it approaches $1/2$ rather quickly; more precisely, for each $p$ it approaches $(p-1)/(2p)$, the fraction of residues mod $p$ that are nonzero quadratic residues. This can be proved by elementary means, but perhaps the nicest way to think about it is as follows: if you choose $n$ numbers $a_i$ independently and sum $a_i^2 \bmod p$, the resulting distribution is the $n$-th convolution power of the distribution of a single random square, so its discrete Fourier transform is the $n$-th power of the discrete Fourier transform, call it $\gamma$, of the distribution of $a^2 \bmod p$. Here $\gamma$ is normalized so that $\gamma(0)=1$. Then for $k \neq 0$ we have $\gamma(k) = (k/p)\,\gamma(1)$, where $(\cdot/p)$ is the Legendre symbol, and
$$
p \gamma(1) = \sum_{a \bmod p} \exp(2\pi i a^2/p),
$$
which is a Gauss sum and is thus a square root of $\pm p$. It follows that $|\gamma(k)| = p^{-1/2}$ for all $k \neq 0$, from which we soon see that each value of the $n$-fold convolution approaches $1/p$ at the exponential rate $p^{-n/2}$, and the probability you asked for approaches $(p-1)/(2p)$ at the same rate.
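As a quick numerical sanity check (not needed for the argument), here is a short Python computation verifying that $|\gamma(k)| = p^{-1/2}$ for $k \neq 0$, and that every value of the distribution of $a_1^2+\dots+a_n^2 \bmod p$ lies within $p^{-n/2}$ of $1/p$, for the illustrative choice $p=7$, $n=4$:

```python
import cmath
from collections import Counter
from itertools import product

def gamma(p):
    # Normalized DFT of the distribution of a^2 mod p (uniform a mod p):
    # gamma(k) = (1/p) * sum_a exp(2*pi*i*k*a^2/p), so gamma(0) = 1.
    return [sum(cmath.exp(2j * cmath.pi * k * (a * a % p) / p) for a in range(p)) / p
            for k in range(p)]

p, n = 7, 4
g = gamma(p)

# Gauss-sum fact: |gamma(k)| = p^(-1/2) for every k != 0.
for k in range(1, p):
    assert abs(abs(g[k]) - p ** -0.5) < 1e-9

# Brute-force distribution of a_1^2 + ... + a_n^2 mod p over all p^n choices;
# every cell is within p^(-n/2) of the uniform value 1/p.
counts = Counter(sum(a * a for a in t) % p for t in product(range(p), repeat=n))
max_dev = max(abs(counts[r] / p ** n - 1 / p) for r in range(p))
assert max_dev < p ** (-n / 2)
```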
As noted above, this result, and indeed the exact probability, can be obtained by elementary means, yielding a (known but not well-known) alternative proof of Quadratic Reciprocity(!). But that's probably too far afield for the present purpose.
Using the Newton identities, one can (in the high characteristic regime $p>k$) express the elementary symmetric polynomial $\sum_{i_1 < \dots < i_k} a_{i_1} \dots a_{i_k}$ in terms of the moments $\sum_{i=1}^k a_i^j$ for $j=1,\dots,k$. The question then boils down to the equidistribution of $\sum_{i=1}^k (a_i^j)_{j=1}^k$ in ${\mathbf F}_p^k$. There is, however, an obstruction to equidistribution, because the points $(a^j)_{j=1}^k$ for $a=-2,-1,1,2$ do not span the entirety of ${\mathbf F}_p^k$ once $k \geq 4$, due to the existence of non-trivial polynomials $P$ of degree $4$ or higher that vanish at $-2,-1,1,2$. For instance, because $P(x) = (x+2)(x+1)(x-1)(x-2) = x^4 - 5 x^2 + 4$ vanishes at these points, the vector $(m_j)_{j=1}^k := \sum_{i=1}^k (a_i^j)_{j=1}^k$ is necessarily constrained to the hyperplane $m_4 - 5 m_2 + 4k = 0$. This is going to create some distortions to the probability that $\sum_{i_1 < \dots < i_k} a_{i_1} \dots a_{i_k}$ vanishes mod $p$ that can be explicitly calculated for each $p,k$, but the formula is going to be messy (these errors will, however, be of lower order in the limit $p \to \infty$ with $k$ held fixed, by the Lang-Weil estimates).
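Both points are easy to check numerically. The following Python sketch (with an illustrative prime $p=101$ and a random sequence over $\{-2,-1,1,2\}$) recovers the elementary symmetric polynomials from the power sums via Newton's identities, using that $1,\dots,k$ are invertible when $p>k$, and then verifies the hyperplane constraint $m_4 - 5m_2 + 4k = 0$:

```python
import random
from math import prod
from itertools import combinations

def elem_sym_via_newton(a, p):
    # Newton's identities over F_p (valid here since p > len(a), so 1..len(a)
    # are invertible):  j*e_j = sum_{i=1..j} (-1)^(i-1) * e_{j-i} * m_i,
    # where m_i = sum_t a_t^i are the power sums.
    k = len(a)
    m = [sum(pow(x % p, i, p) for x in a) % p for i in range(k + 1)]
    e = [1] + [0] * k
    for j in range(1, k + 1):
        s = sum((-1) ** (i - 1) * e[j - i] * m[i] for i in range(1, j + 1))
        e[j] = s * pow(j, -1, p) % p
    return e

p = 101
rng = random.Random(1)
a = [rng.choice([-2, -1, 1, 2]) for _ in range(7)]
e = elem_sym_via_newton(a, p)

# Cross-check against the direct definition of each elementary symmetric polynomial.
for j in range(1, len(a) + 1):
    assert e[j] == sum(prod(c) for c in combinations(a, j)) % p

# The power sums are constrained: m_4 - 5*m_2 + 4*k = 0 identically, because
# P(x) = (x+2)(x+1)(x-1)(x-2) = x^4 - 5*x^2 + 4 vanishes at every permitted value.
assert sum(x ** 4 for x in a) - 5 * sum(x ** 2 for x in a) + 4 * len(a) == 0
```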
(To put it another way, one can use the Newton identities to express the elementary symmetric polynomial as some polynomial combination of the frequencies $d_{-2}, d_{-1}, d_1, d_2$ that count how often the random sequence $a_1,\dots,a_k$ attains each of its permitted values $-2, -1, 1, 2$. To count the probability that this polynomial vanishes mod $p$, one has to count the points in some variety over ${\bf F}_p$, which in general is a task for the Lang-Weil estimate.)
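For what it's worth, the fact that the elementary symmetric polynomials depend only on the frequencies $d_v$ is just the factorization $\prod_i (1 + a_i x) = \prod_v (1 + v x)^{d_v}$, whose $x^j$ coefficient is $e_j$. A quick Python check of this:

```python
import random
from math import prod
from collections import Counter
from itertools import combinations

def elem_syms_from_counts(d, kmax):
    # Coefficients of prod_v (1 + v*x)^{d[v]} up to degree kmax; the
    # coefficient of x^j is e_j, so each e_j is a polynomial in the counts d_v.
    coeffs = [1] + [0] * kmax
    for v, cnt in d.items():
        for _ in range(cnt):
            for j in range(kmax, 0, -1):  # multiply in place by (1 + v*x)
                coeffs[j] += v * coeffs[j - 1]
    return coeffs

rng = random.Random(2)
a = [rng.choice([-2, -1, 1, 2]) for _ in range(8)]
coeffs = elem_syms_from_counts(Counter(a), len(a))

# Matches the direct definition e_j = sum over j-subsets of products.
for j in range(1, len(a) + 1):
    assert coeffs[j] == sum(prod(c) for c in combinations(a, j))
```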
Your second sum seems to be expressible in terms of a symmetric polynomial in a suitable matrix algebra. If one had $i_k < j_k$ in place of the constraint $i_k \leq j_k$, then one would just need to take the order-$2k$ elementary symmetric polynomial of the matrices $\begin{pmatrix} 0 & b_i \\ a_i & 0 \end{pmatrix}$ and extract the top left coefficient. With $i_k \leq j_k$ the situation is more complicated, but I expect there is still some sort of matrix representation. However, the algebra being non-abelian, I doubt there is a reduction to the abelian elementary symmetric polynomial considered earlier, and given how complicated that formula already was, I'm afraid it is not going to be fun to try to control this sum (except possibly in the asymptotic regime where $p$ is somewhat large and $k$ goes to infinity; actually, the non-abelian case might be substantially more "mixing" than the abelian one, and one could conceivably get better asymptotics by using some of the theory of expansion of Cayley graphs, but this looks like a lot of work...).
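To illustrate the strict-inequality variant (with the caveat that I am guessing the intended ordering of the $a$'s and $b$'s in the original sum), here is a brute-force Python check that the top left entry of the ordered elementary symmetric polynomial $\sum_{n_1 < \dots < n_{2k}} M_{n_1} \cdots M_{n_{2k}}$ of the matrices $M_i = \begin{pmatrix} 0 & b_i \\ a_i & 0 \end{pmatrix}$ equals $\sum_{i_1 < j_1 < \dots < i_k < j_k} b_{i_1} a_{j_1} \cdots b_{i_k} a_{j_k}$:

```python
import random
from itertools import combinations

def mat_mul(X, Y):
    # 2x2 integer matrix product
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def ordered_elem_sym(mats, r):
    # Degree-r "ordered" elementary symmetric polynomial of non-commuting
    # matrices: sum over increasing index tuples of the ordered products.
    total = [[0, 0], [0, 0]]
    for idx in combinations(range(len(mats)), r):
        P = [[1, 0], [0, 1]]
        for t in idx:
            P = mat_mul(P, mats[t])
        total = [[total[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return total

rng = random.Random(3)
n, k = 7, 2
a = [rng.randrange(1, 10) for _ in range(n)]
b = [rng.randrange(1, 10) for _ in range(n)]
mats = [[[0, b[i]], [a[i], 0]] for i in range(n)]

# Each ordered product of 2k antidiagonal matrices contributes
# b_{n_1} a_{n_2} b_{n_3} a_{n_4} ... to the top left entry.
direct = sum(b[t[0]] * a[t[1]] * b[t[2]] * a[t[3]]
             for t in combinations(range(n), 2 * k))
assert ordered_elem_sym(mats, 2 * k)[0][0] == direct
```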
Best Answer
The condition $$\sum_{ 1 \leq i < j \leq n} a_ia_j \equiv 0 \pmod p$$ is equivalent to $$\left(\sum_{ 1 \leq i\leq n} a_i\right)^2 \equiv n \pmod p,$$ because $\left(\sum_i a_i\right)^2 = \sum_i a_i^2 + 2\sum_{i<j} a_i a_j = n + 2\sum_{i<j} a_i a_j$ (each $a_i^2=1$) and $2$ is invertible modulo the odd prime $p$. So a necessary condition is that $n$ is a quadratic residue modulo $p$ (including the zero residue). If $n$ is divisible by $p$, then the above condition says that the sum of the $a_i$'s is divisible by $p$. Otherwise, the condition says that the sum of the $a_i$'s is congruent to one of the two square roots of $n$ modulo $p$. Now it is easy to see that the sum of the $a_i$'s is equidistributed modulo $p$ (think about what happens when an $a_i=1$ is switched to $a_i=-1$), hence in the first case the probability is $1/p+o(1)$, and in the second case it is $2/p+o(1)$, as $n$ tends to infinity.
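These probabilities are easy to confirm by brute force for a small prime (say $p=5$), using the exact identity $2\sum_{i<j} a_i a_j = \left(\sum_i a_i\right)^2 - n$:

```python
from itertools import product

def prob_e2_vanishes(n, p):
    # Exact probability, over uniform signs (a_1,...,a_n) in {-1,+1}^n, that
    # sum_{i<j} a_i a_j == 0 (mod p), via the identity
    # 2 * sum_{i<j} a_i a_j = (a_1 + ... + a_n)^2 - n.
    hits = 0
    for signs in product((-1, 1), repeat=n):
        s = sum(signs)
        e2 = (s * s - n) // 2  # an integer, since s and n have the same parity
        if e2 % p == 0:
            hits += 1
    return hits / 2 ** n

# n = 12 ≡ 2 (mod 5) is a quadratic non-residue: the probability is exactly zero.
assert prob_e2_vanishes(12, 5) == 0.0
# n = 15 ≡ 0 (mod 5): probability near 1/5;  n = 16 ≡ 1, a nonzero square: near 2/5.
assert abs(prob_e2_vanishes(15, 5) - 1 / 5) < 0.05
assert abs(prob_e2_vanishes(16, 5) - 2 / 5) < 0.05
```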
In fact the probabilities can be calculated explicitly as a linear combination of $n$-th powers of $p$ complex numbers (which depend only on $p$), since the sum of the $a_i$'s modulo $p$ is determined by $\#\{i:a_i=1\}$ modulo $p$, and vice versa. Compare with this post, where the role of $p$ is played by $4$. It follows, in particular, that the $o(1)$ terms above decay exponentially fast. For a more complete reference, see Theorems 8.7.2 & 8.7.3 in Wagner: A first course in enumerative combinatorics (AMS, 2020).
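Concretely, the standard roots-of-unity filter gives $\#\{a \in \{\pm1\}^n : \#\{i : a_i = 1\} \equiv r \pmod p\} = \frac{1}{p}\sum_{j=0}^{p-1} \omega^{-jr}(1+\omega^j)^n$ with $\omega = e^{2\pi i/p}$, which exhibits the counts (and hence the probabilities) as linear combinations of $n$-th powers of the $p$ numbers $1+\omega^j$. A quick Python check:

```python
import cmath
from math import comb

def count_ones_mod(n, p, r):
    # Roots-of-unity filter: #{sign sequences of length n with
    # #{i : a_i = +1} ≡ r (mod p)} = (1/p) * sum_j w^(-j*r) * (1 + w^j)^n.
    w = cmath.exp(2j * cmath.pi / p)
    val = sum(w ** (-j * r) * (1 + w ** j) ** n for j in range(p)) / p
    return round(val.real)

n, p = 12, 5
for r in range(p):
    # Direct count: binomial coefficients C(n, h) over h ≡ r (mod p).
    assert count_ones_mod(n, p, r) == sum(comb(n, h) for h in range(r, n + 1, p))
```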