If I'm reading it correctly, your definition of cpd would count any kernel with $k(x, x) > 0$ at even a single point $x$ as cpd, just by considering the subset $\{ x \}$.
One way to phrase a matrix being positive definite is that $c^T A c > 0$ for all nonzero vectors $c$; a kernel is positive definite if that holds for all Gram matrices $A$. (The definitions below use $\ge 0$, i.e. what a linear algebraist would call positive semidefinite.) The conditioning in cpd refers to restricting the "for all vectors $c$" part, not the "for all Gram matrices $A$" part. Specifically, the most common definition restricts it to "for all vectors $c$ with $1^T c = 0$".
These are useful because, just as pd kernels correspond to inner products in some Hilbert space, cpd kernels are closely connected to distances in some Hilbert space. SVMs and kernel PCA both work with any cpd kernel, though a particular optimization algorithm might not.
Here are the full definitions from Schölkopf and Smola's Learning with Kernels (2002), where $\mathbb K$ refers to either $\mathbb R$ or $\mathbb C$.
Definition 2.20 (Conditionally Positive Definite Matrix) A symmetric $m \times m$ matrix $K$ ($m \ge 2$) taking values in $\mathbb K$ and satisfying
$$ \sum_{i, j = 1}^m c_i \bar{c}_j K_{ij} \ge 0 \text{ for all } c_i \in \mathbb K, \text{ with } \sum_{i=1}^m c_i = 0$$
is called conditionally positive definite (cpd).
Definition 2.21 (Conditionally Positive Definite Kernel) Let $\mathcal X$ be a nonempty set. A function $k : \mathcal X \times \mathcal X \to \mathbb K$ which for all $m \ge 2, x_1, \dots, x_m \in \mathcal X$ gives rise to a conditionally positive definite Gram matrix is called a conditionally positive definite (cpd) kernel.
I don't know a simple eigenvalue-based definition.
To prove a kernel is cpd, you can either prove it directly from the definition above (usually difficult), or you can try to build it out of other cpd kernels using various properties that preserve cpd-ness.
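As a quick (non-rigorous) sanity check before attempting a proof, you can also test cpd-ness numerically: sample some points, build the Gram matrix, and check that the quadratic form is nonnegative on the subspace $\{c : \sum_i c_i = 0\}$. A minimal sketch, assuming NumPy; the projector trick and the example kernel $k(x, x') = -\|x - x'\|^2$ (a standard cpd-but-not-pd kernel) are just my illustration, not from the book:

```python
import numpy as np

def is_cpd_on_sample(K, tol=1e-8):
    """Check c^T K c >= 0 for all c with sum(c) = 0, up to tolerance."""
    m = K.shape[0]
    # Orthogonal projector onto the hyperplane {c : 1^T c = 0}.
    P = np.eye(m) - np.ones((m, m)) / m
    # Restricted to that subspace, K must have no negative eigenvalues.
    return np.linalg.eigvalsh(P @ K @ P).min() >= -tol

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
# k(x, x') = -||x - x'||^2 is cpd but not pd.
K = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
print(is_cpd_on_sample(K))                   # True: cpd holds on this sample
print(np.linalg.eigvalsh(K).min() >= -1e-8)  # False: K itself is not psd
```

Of course, passing on one sample proves nothing; a failure, however, does disprove cpd-ness.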
Here are some useful connections to pd kernels from that book:
Proposition 2.22 (Constructing PD Kernels from CPD Kernels [42, Lemma 2.1, p. 74]) Let $x_0 \in \mathcal X$, and let $k$ be a symmetric kernel on $\mathcal X \times \mathcal X$. Then
$$\tilde k(x, x') := \tfrac12 \left( k(x, x') - k(x, x_0) - k(x_0, x') + k(x_0, x_0) \right)$$
is positive definite if and only if $k$ is conditionally positive definite.
[42] is Berg, Christensen, and Ressel's Harmonic Analysis on Semigroups (1984).
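As a hedged numerical illustration of Proposition 2.22 (my own, not from the book): take the cpd kernel $k(x, x') = -\|x - x'\|^2$ and the reference point $x_0 = 0$; the construction then gives back exactly the linear kernel $\langle x, x' \rangle$, which is indeed pd.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))
x0 = np.zeros(5)  # reference point x_0; the origin is an arbitrary choice

def k(a, b):
    # cpd (but not pd) kernel: k(x, x') = -||x - x'||^2
    return -np.sum((a - b) ** 2)

def k_tilde(a, b):
    # the construction from Proposition 2.22
    return 0.5 * (k(a, b) - k(a, x0) - k(x0, b) + k(x0, x0))

K_tilde = np.array([[k_tilde(a, b) for b in X] for a in X])
print(np.linalg.eigvalsh(K_tilde).min() >= -1e-8)  # True: pd (up to round-off)
print(np.allclose(K_tilde, X @ X.T))               # True: here k_tilde is <x, x'>
```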
Proposition 2.28 (Connection PD – CPD [465, Thm. 1, p. 527]) A kernel $k$ is conditionally positive definite if and only if $\exp(t k)$ is positive definite for all $t > 0$.
[465] is the classic paper Metric spaces and positive definite functions of Schoenberg (Transactions of the American Mathematical Society 44:522-536, 1938).
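Proposition 2.28 is also easy to spot-check numerically on a sample; a rough sketch (again my own illustration, assuming NumPy), using the cpd kernel $-\|x - x'\|^2$, for which $\exp(t k)$ is just the Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))
K = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # cpd: -||x - x'||^2

for t in (0.1, 1.0, 10.0):
    # exp(t * k) is the Gaussian kernel exp(-t ||x - x'||^2); it should be pd.
    print(t, np.linalg.eigvalsh(np.exp(t * K)).min() >= -1e-8)  # True for each t
```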
First, your definition should be corrected to
$$k(x, x') = \langle x, x\color{red}{'}\rangle = \sum_{a = 1}^N x_a x_a'. $$
The problem with your derivation is that you didn't clearly distinguish $x_i = (x_{i,1}, \ldots, x_{i,N})^T$ from $x_j = (x_{j, 1}, \ldots, x_{j, N})^T$.
Let's say you have $p$ vectors $\{x_1, \ldots, x_p\}$ under consideration. Then (the derivation you provided was actually incorrect):
\begin{align}
& \sum_{i, j} c_i c_j k(x_i, x_j) \\
= & \sum_{i = 1}^p \sum_{j = 1}^p c_i c_j \sum_{a = 1}^N x_{i,a}x_{j, a} \\
= & \sum_{i = 1}^p \sum_{j = 1}^p \sum_{a = 1}^N c_i x_{i,a} c_j x_{j, a} \\
= & \sum_{a = 1}^N \left(\sum_{i = 1}^p c_i x_{i, a}\right) \left(\sum_{j = 1}^p c_j x_{j, a}\right) \qquad \text{ change the order of summation}\\
= & \sum_{a = 1}^N \left(\sum_{i = 1}^p c_i x_{i, a}\right)^2 \geq 0. \qquad i, j \text{ are just dummy indices}
\end{align}
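If it helps, the last two lines (the double sum collapsing to $\big\|\sum_i c_i x_i\big\|^2 \ge 0$) are easy to confirm numerically with made-up data; a small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
p, N = 6, 4
X = rng.normal(size=(p, N))  # rows are x_1, ..., x_p
c = rng.normal(size=p)

lhs = sum(c[i] * c[j] * (X[i] @ X[j]) for i in range(p) for j in range(p))
rhs = np.sum((c @ X) ** 2)   # sum_a (sum_i c_i x_{i,a})^2 = ||sum_i c_i x_i||^2
print(np.isclose(lhs, rhs), rhs >= 0)  # True True
```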
A kernel is psd if and only if all Gram matrices are psd. Thus if you find an instance of a Gram matrix which is not psd, then the kernel is not psd; but finding a single psd Gram matrix does not prove that it is always psd.
So, yes, your first kernel is not psd. (Incidentally, I don't understand your notation $(x^T - t)^2$ at all.) In fact, you can find a simpler counterexample in this case: if $k(x, x) < 0$ for some $x$, then $k$ is not psd, since the $1 \times 1$ Gram matrix of the set $\{ x \}$ is not psd.
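To make the "find one bad Gram matrix" strategy concrete with a made-up example (not whatever your notation means): for $k(x, t) = \|x - t\|^2$, any two distinct points already give a non-psd Gram matrix. A minimal sketch, assuming NumPy:

```python
import numpy as np

def k(a, b):
    # illustrative non-psd "kernel": squared Euclidean distance
    return np.sum((a - b) ** 2)

pts = [np.array([0.0]), np.array([1.0])]
K = np.array([[k(a, b) for b in pts] for a in pts])  # [[0, 1], [1, 0]]
print(np.linalg.eigvalsh(K))  # [-1.  1.] -> a negative eigenvalue, so not psd
```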
For the second kernel: I again don't understand your notation $e^{(x_1, t_1)}$ at all.
If you somehow meant $e^{- (x_1 - t_1)^2}$, then yes, it is a valid kernel. This is because it's a valid kernel (the Gaussian) on one-dimensional inputs, and ignoring coordinates of your input vectors doesn't matter: if $k$ is a valid kernel on $\mathbb R$, then $k_d(x, y) = k(x_d, y_d)$ is valid on $\mathbb R^m$, since for any set of points $\{x^{(a)}\}_{a=1}^n$ and weights $\alpha_a$, $$ \sum_{i=1}^n \sum_{j=1}^n \alpha_i\, k_d(x^{(i)}, x^{(j)})\, \alpha_j = \sum_{i=1}^n \sum_{j=1}^n \alpha_i\, k(x_d^{(i)}, x_d^{(j)})\, \alpha_j \ge 0 $$ because $k$ is a valid kernel.
If you meant $e^{(x_1 - t_1)^2}$, then no, it's not a valid kernel: flipping the sign in the exponent of the Gaussian breaks positive semidefiniteness. For any two points with $x_1 \ne t_1$, the $2 \times 2$ Gram matrix $\begin{pmatrix} 1 & e^{(x_1 - t_1)^2} \\ e^{(x_1 - t_1)^2} & 1 \end{pmatrix}$ has determinant $1 - e^{2 (x_1 - t_1)^2} < 0$.
If you meant $e^{x_1 t_1}$, then yes, it is valid: $(x, t) \mapsto x_1 t_1$ is a valid kernel, and in general, if $k$ is a valid kernel then so is $e^k$ (expand $e^k$ as a power series: sums, products, and pointwise limits of valid kernels are valid).
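If you want a quick numerical sanity check of that last case (assuming NumPy, with random made-up data): restrict to the first coordinate, exponentiate, and confirm neither Gram matrix has a negative eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(25, 3))

K_lin = np.outer(X[:, 0], X[:, 0])  # k(x, t) = x_1 t_1: a rank-one, valid kernel
K_exp = np.exp(K_lin)               # e^{x_1 t_1}: should also be psd

print(np.linalg.eigvalsh(K_lin).min() >= -1e-8)  # True
print(np.linalg.eigvalsh(K_exp).min() >= -1e-8)  # True
```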