There are indefinite scalar product spaces. I suggest reading "Indefinite Linear Algebra and Applications" by Gohberg, Lancaster, Rodman. Applications are wide; to name a few: theory of relativity and the research of polarized light (mostly Minkowski space is used here), and matrix polynomials (nicely covered in "Matrix polynomials", again by Gohberg, Lancaster, Rodman).
Each indefinite scalar product space is induced by a nonsingular Hermitian indefinite matrix $J$ by
$$[x, y] := \langle Jx, y \rangle = y^* J x,$$
although I've seen other variations, as is usual with this (for example, $x^* J y$).
The vectors which you've mentioned, with $x \ne 0$ but $[x,x] = 0$, are usually called degenerate, although other names are used as well, for example neutral. In the latter terminology (I think used mostly by physicists), $x$ with $[x,x] < 0$ is called negative, and $x$ with $[x,x] > 0$ is called positive.
The most common indefinite scalar product space is hyperbolic, induced by $J = \mathop{\rm diag}(j_1,\dots,j_n)$, where $j_k \in \{-1,1\}$. Minkowski space is usually defined by $j_1 = \pm 1$ and $j_k = \mp 1$ for $k > 1$, or $j_k = \pm 1$ for $k < n$ and $j_n = \mp 1$.
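As a quick numerical sketch of the terminology above (a small demo, not from any of the cited references; the signature choice $j_1 = 1$, $j_k = -1$ is just one of the conventions mentioned):

```python
import numpy as np

# Minkowski-style product [x, y] = y^* J x with J = diag(1, -1, -1, -1).
J = np.diag([1.0, -1.0, -1.0, -1.0])

def bracket(x, y):
    """Indefinite scalar product [x, y] = y^* J x."""
    return np.conj(y) @ (J @ x)

def classify(x):
    q = bracket(x, x).real
    if q > 0:
        return "positive"
    if q < 0:
        return "negative"
    return "neutral"  # degenerate: x != 0 but [x, x] = 0

print(classify(np.array([2.0, 0.0, 0.0, 0.0])))  # → positive
print(classify(np.array([0.0, 1.0, 0.0, 0.0])))  # → negative
print(classify(np.array([1.0, 1.0, 0.0, 0.0])))  # → neutral
```

The last vector is the analogue of a "light-like" vector in relativity: nonzero, yet self-orthogonal under $[\cdot,\cdot]$.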
There are wider classes of scalar products on finite-dimensional real and complex spaces (I think there was even some work on spaces of quaternions) than the indefinite ones. For example, orthosymmetric products, for which $J$ need not be Hermitian, but $J^* = \tau J$ for some $\tau \in \mathbb{C}$ with $|\tau| = 1$.
Another widely researched class is that of symplectic scalar products, induced by $J = \left[\begin{smallmatrix} 0 & {\rm I}_n \\ -{\rm I}_n & 0 \end{smallmatrix}\right]$.
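A small sketch tying this back to the orthosymmetric condition above: the symplectic $J$ satisfies $J^* = \tau J$ with $\tau = -1$ (it is real skew-symmetric), and it is nonsingular since $J^2 = -I$:

```python
import numpy as np

# Build the symplectic J = [[0, I_n], [-I_n, 0]] for n = 2.
n = 2
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])

# Orthosymmetric with tau = -1: J^* = -J.
assert np.allclose(J.T, -J)

# Nonsingular, since J^2 = -I (so J^{-1} = -J).
assert np.allclose(J @ J, -np.eye(2 * n))
```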
I suggest reading Tisseur (especially her "Structured Factorizations in Scalar Product Spaces"), Higham, Sanja Singer (especially "Orthosymmetric block reflectors", with Saša Singer), Mehrmann, Mehl, maybe a few of my own papers, ...
EDIT: Oops, it's not true. In dimension $2$, consider the indefinite inner product
$$ \langle u, v \rangle = u_1 v_1 - u_2 v_2.$$
The matrix $$A = \begin{pmatrix}1 & -1\\ 1 & -1\end{pmatrix}$$ is "self-adjoint" with respect to this, i.e.
$$ \langle u, A v \rangle = \langle A u, v \rangle = (u_1 - u_2)(v_1 - v_2)$$
but it is not diagonalizable: its eigenvalue $0$ has algebraic multiplicity $2$ but
geometric multiplicity $1$, its only eigenvectors being scalar multiples of $\begin{pmatrix}1\\ 1\end{pmatrix}$.
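The counterexample above is easy to verify numerically (a sketch; here $J = \mathrm{diag}(1,-1)$ represents the indefinite product $\langle u, v\rangle = u_1 v_1 - u_2 v_2$):

```python
import numpy as np

J = np.diag([1.0, -1.0])
A = np.array([[1.0, -1.0],
              [1.0, -1.0]])

def ip(u, v):
    """Real indefinite product <u, v> = u^T J v."""
    return u @ (J @ v)

# J-self-adjointness <u, Av> = <Au, v> for random u, v ...
rng = np.random.default_rng(0)
u, v = rng.standard_normal(2), rng.standard_normal(2)
assert np.isclose(ip(u, A @ v), ip(A @ u, v))

# ... equivalently, J A is symmetric (A^T J = J A).
assert np.allclose((J @ A).T, J @ A)

# Yet A is not diagonalizable: it is nilpotent of rank 1,
# so eigenvalue 0 has algebraic multiplicity 2, geometric multiplicity 1.
assert np.allclose(A @ A, np.zeros((2, 2)))
assert np.linalg.matrix_rank(A) == 1
```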
Best Answer
Here is an algebraic approach to adjoint operators. Let us strip away the existence of an inner product and instead take two vector spaces $V$ and $W$. Furthermore, let $V^*$ and $W^*$ be the linear duals of $V$ and $W$, that is, the collection of linear maps $V\to k$ and $W\to k$, where $k$ is the base field. If you're working over $\mathbb R$ or $\mathbb C$, or some other topological field, you might want to work with continuous linear maps between topological vector spaces.
Given a linear operator $A: V\to W$, we can define a dual map $A^*: W^* \to V^*$ by $(A^*(\phi))(v)=\phi(A(v))$. It is straightforward to verify that this gives a well-defined linear map between the vector spaces. This dual map is the adjoint of $A$. For most sensible choices of dual topologies, this map should also be continuous.
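In coordinates (assuming standard bases, so a functional is represented by a row vector), the dual map is just the transpose acting on the other side. A quick sketch of the defining property $(A^*\phi)(v) = \phi(Av)$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))   # A : V -> W with dim V = 2, dim W = 3
phi = rng.standard_normal(3)      # a functional on W, as a row vector
v = rng.standard_normal(2)

# A^* phi is the functional on V represented by the row vector phi @ A,
# so (A^* phi)(v) = (phi @ A) @ v, while phi(A v) = phi @ (A @ v).
lhs = (phi @ A) @ v
rhs = phi @ (A @ v)
assert np.isclose(lhs, rhs)       # the defining property of the dual map
```

That the two sides agree is just associativity of matrix multiplication, which is the coordinate shadow of the abstract definition.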
The question is, how does this relate to what you are doing with inner products? Giving an inner product on $V$ is the same as giving an isomorphism between $V$ and $V^*$ as follows:
Given an inner product, $\langle x, y \rangle$, we can define an isomorphism $V\to V^*$ via $x\mapsto \langle x, - \rangle$. This will be an isomorphism by nondegeneracy. Similarly, given an isomorphism $\phi:V\to V^*$, we can define an inner product by $\langle x,y\rangle =\phi(x)(y)$. The "inner products" coming from isomorphisms will not in general be symmetric, and so they are better called bilinear forms, but we don't need to concern ourselves with this difference.
So let $\langle x,y \rangle$ be an inner product on $V$, and let $\varphi$ be the corresponding isomorphism $\varphi:V\to V^*$ defined above. Then given $A:V\to V$, we have a dual map $A^*:V^* \to V^*$. However, we can use our isomorphism to define a different dual map (also denoted $A^*$, but which we will denote by $A^{\dagger}$ to prevent confusion) by $A^{\dagger}(v)=\varphi^{-1}(A^*\varphi(v))$. This is the adjoint that you are using.
Let us see why. In what follows, $x\in V, f\in V^*$. Note that $\langle x, \varphi^{-1} f \rangle = f(x)$ and so we have
$$ \langle Ax, \varphi^{-1}f \rangle = f(Ax)=(A^*f)(x)=\langle x, \varphi^{-1}(A^* f) \rangle $$
Now, let $y=\varphi^{-1}f$, so that $\varphi(y)=f$. Then we can rewrite the first and last terms of the above equality as
$$\langle Ax, y \rangle = \langle x, \varphi^{-1}(A^* \varphi(y)) \rangle = \langle x, A^{\dagger}y \rangle $$
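To make the construction concrete, here is a coordinate sketch (my own illustration, not from the answer above): for a bilinear form $\langle x, y\rangle = x^T G y$ with $G$ symmetric and invertible, the isomorphism is $\varphi(x) = x^T G$, and chasing the definition $A^{\dagger} = \varphi^{-1} \circ A^* \circ \varphi$ gives the matrix formula $A^{\dagger} = G^{-1} A^T G$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# A symmetric invertible G (positive definite here, for simplicity).
M = rng.standard_normal((n, n))
G = M.T @ M + n * np.eye(n)
A = rng.standard_normal((n, n))

# The adjoint obtained by transporting the dual map through varphi:
#   A_dagger = G^{-1} A^T G.
A_dagger = np.linalg.inv(G) @ A.T @ G

# Verify <Ax, y> = <x, A_dagger y> for random x, y.
x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = (A @ x) @ G @ y            # <Ax, y> = (Ax)^T G y
rhs = x @ G @ (A_dagger @ y)     # <x, A_dagger y> = x^T G A_dagger y
assert np.isclose(lhs, rhs)
```

When $G = I$ this recovers the familiar $A^{\dagger} = A^T$; the indefinite case of the first answer is just a $G$ (there called $J$) that is symmetric but not positive definite.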