This is probably something five-year-old physicists know, but here goes: is there a standard methodology for computing Fourier transforms of things like $\log |x|$? Wolfram Alpha will happily give an answer (involving a delta function), but actually trying to do this yourself (by parts) gives horribly divergent-looking terms. The question which actually came up had $x$ a vector in $\mathbb{R}^3,$ where the divergent terms are even more horrible than in the one-dimensional case. (I am referring to the technique of just cutting off the function at some large $R;$ there are obviously other techniques, like weighting the integrand by an exponential weight, so that you are computing a combination of Fourier and Laplace transforms, and then taking the analytic continuation at $0.$) All of these should give the same answer, and there should be a not-totally-ad-hoc way of doing this, one should think…
[Math] Fourier transforms of functions not in $L^2.$
fourier-analysis, harmonic-analysis, mp.mathematical-physics
Related Solutions
For $\mathbb{R}$: suppose $f$ is our compactly supported function and $g$ is its Fourier transform. Since $f$ is compactly supported, $\hat{f} = g$ is the restriction to $\mathbb{R}$ of an entire function $g(z)$ by the Paley–Wiener theorems. Since $g$ is entire and vanishes on an open set, $g \equiv 0$. The proof of this last fact (with the assumption weakened to vanishing on a set with an accumulation point) uses that $\mathbb{C}$ is connected, which is of course directly related to $\mathbb{R}$ being connected.
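As a concrete illustration (my own sketch, not part of the argument above): for $f = \mathbf{1}_{[-1,1]}$, the transform $\hat f(\xi) = \int_{-1}^{1} e^{-i\xi x}\,dx = 2\sin(\xi)/\xi$ is visibly the restriction of an entire function, and the defining integral itself already converges at complex frequencies and agrees with that extension:

```python
# Sketch: the Fourier transform of a compactly supported function extends to
# an entire function.  Here f = indicator of [-1, 1], so fhat(z) = 2 sin(z)/z.
# We evaluate the defining integral at a genuinely complex frequency z by a
# trapezoid rule and compare with the closed form (all names are my own).
import numpy as np

def fhat_integral(z, n=4001):
    """Trapezoid approximation of integral_{-1}^{1} exp(-i z x) dx."""
    x = np.linspace(-1.0, 1.0, n)
    vals = np.exp(-1j * z * x)
    h = x[1] - x[0]
    return h * (np.sum(vals) - 0.5 * (vals[0] + vals[-1]))

z = 1.0 + 0.5j                    # complex frequency: the integral still converges
closed_form = 2 * np.sin(z) / z   # entire as a function of z
print(abs(fhat_integral(z) - closed_form))  # small: the two agree
```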
I expect that you knew this proof, but maybe you accidentally overlooked where connectedness was used. Or, more likely, this proof doesn't capture what you had in mind and you want a more general proof for $\mathbb{R}^n$. I can't currently do that. Instead, I have another idea, which focuses on a different aspect than connectedness but seems to be related.
In connection with the analogous statement for polynomials: that a polynomial over a field can have only finitely many zeroes is proved via a complexity argument, using that infinite > finite. Analytic functions (i.e., the completion of the polynomials over $\mathbb{C}$) can have infinitely many zeroes, but uncountably many zeroes force an analytic function to be identically $0$. So it seems that a set with a limit point is more complex than a countable set. I'm thinking the complexity argument should be interpreted in terms of density in topology: no finite subset of $\mathbb{N}$ is dense in the discrete topology, or in any open subset of the cofinite topology on $\mathbb{N}$; similarly for $\mathbb{R}$ and $\mathbb{C}$.
I hope this is helpful. This is an interesting question and I'll think more about it.
Perhaps the phenomenon you are asking about is: why is the definition of a positive-definite function natural?
One answer is that positive-definite functions are exactly coefficients of group representations, in the following sense. If $\pi : \mathbb{R}\to U(H)$ is a unitary representation of $\mathbb{R}$ on some Hilbert space $H$, and $h\in H$ is a vector, then the function $$t\mapsto \langle \pi (t) h, h\rangle$$ is positive-definite. Conversely, given a positive-definite function $\phi$, there exists a Hilbert space $H$, a vector $h\in H$ and a unitary representation $\pi$ of $\mathbb{R}$ on $H$, for which $\phi(t)=\langle \pi(t)h,h\rangle$.
Indeed, the $n\times n$ matrix occurring in the definition of a positive-definite function is nothing more than the Gram matrix of inner products $\langle \pi (t_i) h, \pi (t_j) h\rangle$; and positivity of this matrix is just a reflection of the fact that the inner product of $H$, restricted to the linear span of $\{\pi(t_i)h : i=1,\dots,n\}$, is positive-definite.
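A quick numerical illustration of this Gram-matrix picture (my own sketch): take $\pi$ to be translation acting on $L^2(\mathbb{R})$ and $h$ a normalized Gaussian. A short computation gives the coefficient function $\phi(t) = \langle \pi(t)h, h\rangle = e^{-t^2/4}$, and the matrix $(\phi(t_i - t_j))$ coincides with the numerically computed Gram matrix of the translates $\pi(t_i)h$, hence is positive semidefinite for free.

```python
# Sketch (mine): pi = translation on L^2(R), h = normalized Gaussian.  Then
# phi(t) = <pi(t) h, h> = exp(-t^2 / 4), and the matrix (phi(t_i - t_j))
# coincides with the Gram matrix of the translates pi(t_i) h, so it is
# automatically positive semidefinite.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(-4.0, 4.0, size=8)            # arbitrary points t_1, ..., t_n

x = np.linspace(-30.0, 30.0, 60001)           # integration grid for L^2(R)
dx = x[1] - x[0]
# rows: the translated unit vectors pi(t_i) h, h(x) = pi^{-1/4} exp(-x^2/2)
translates = np.pi ** -0.25 * np.exp(-(x[None, :] - t[:, None]) ** 2 / 2)

gram = translates @ translates.T * dx         # numeric <pi(t_i) h, pi(t_j) h>
phi_matrix = np.exp(-(t[:, None] - t[None, :]) ** 2 / 4)

print(np.max(np.abs(gram - phi_matrix)))      # tiny: the two matrices agree
print(np.min(np.linalg.eigvalsh(phi_matrix))) # nonnegative, as it must be
```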
The Fourier transform goes from the functions on the group to functions on the space of irreducible unitary representations of the group, and thus switches positivity and complete positivity.
Best Answer
The umbrella legitimization of many such Fourier transforms is as tempered _distributions_ (where the sense of "distribution" is not the probability sense, but the sense of Laurent Schwartz). The various "regularization" tricks amount to approaching the given distribution, in the weak-$*$ topology on distributions, by more tractable functions. The Fourier transform on tempered distributions is (provably) continuous for this topology, so we conclude that all these tricks must yield the same outcome.
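To see this continuity claim in action (a one-dimensional numerical sketch of my own, not part of the answer): pair $\log|x|$ against the Schwartz function $e^{-\pi x^2}$, once regularized by a sharp cutoff at $|x| \le R$ and once by an exponential damping $e^{-\varepsilon|x|}$. Both approach the same number, which differentiation of the $|x|^s$ pairing identifies as $-\tfrac12(\gamma + \log 4\pi)$:

```python
# Sketch (mine, n = 1): two regularizations of log|x| -- a sharp cutoff at R
# and an exponential damping exp(-eps |x|) -- paired against the Schwartz
# function exp(-pi x^2) converge to the same value, as weak-* continuity promises.
import numpy as np
from scipy.integrate import quad

def pair(f):
    """<f, g> with g(x) = exp(-pi x^2); f assumed even, so integrate 2 * [0, inf)."""
    g = lambda x: f(x) * np.exp(-np.pi * x ** 2)
    a, _ = quad(g, 0.0, 1.0)      # handles the integrable log singularity at 0
    b, _ = quad(g, 1.0, np.inf)
    return 2 * (a + b)

R, eps = 8.0, 1e-5
cutoff = pair(lambda x: np.log(x) if x <= R else 0.0)
damped = pair(lambda x: np.log(x) * np.exp(-eps * x))

# Closed form, from d/ds at s = 0 of the integral of |x|^s exp(-pi x^2):
exact = -(np.euler_gamma + np.log(4 * np.pi)) / 2
print(cutoff, damped, exact)      # all three agree closely
```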
[Edit in response to comment:] The "how to compute" (once we know that any device succeeds) is non-trivial, insofar as it is not clear a priori how explicit an outcome can be expected. The first volume of Gelfand–Graev et al.'s _Generalized Functions_ does many illuminating examples, mostly computed via meromorphic continuation.
The simplest family of examples is probably $|x|^s$. Here, the homogeneity and rotational symmetry, and the fact that Fourier transform respects these (in suitable senses), promise that the Fourier transform of $|x|^s$ on $\mathbb R^n$ is a constant multiple of $|x|^{-n-s}$, for $-n<\Re(s)<0$ to assure local integrability (of both). The constant multiple is determined (for example) by integrating against Gaussians.
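In one dimension this is easy to check numerically (a sketch of my own; the constant below is the $n=1$ case of the classical Riesz formula, which I take as given rather than derive): pairing both sides against the self-dual Gaussian $e^{-\pi x^2}$ and using Parseval turns the constant into a ratio of two ordinary integrals.

```python
# Sketch (mine, n = 1): check that the Fourier transform of |x|^s is
# c(s) |xi|^{-1-s} with c(s) = pi^(-s - 1/2) Gamma((1+s)/2) / Gamma(-s/2)
# (the n = 1 Riesz formula, assumed here).  Pairing against the self-dual
# Gaussian exp(-pi x^2), Parseval gives  <|x|^s, g> = c(s) <|xi|^{-1-s}, g>.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def pair_gaussian(exponent):
    """Integral of |x|^exponent * exp(-pi x^2) over R (even integrand)."""
    g = lambda x: x ** exponent * np.exp(-np.pi * x ** 2)
    a, _ = quad(g, 0.0, 1.0)      # integrable singularity at 0 for -1 < exponent < 0
    b, _ = quad(g, 1.0, np.inf)
    return 2 * (a + b)

s = -0.3                          # any s with -1 < Re(s) < 0 works
c_numeric = pair_gaussian(s) / pair_gaussian(-1 - s)
c_exact = np.pi ** (-s - 0.5) * gamma((1 + s) / 2) / gamma(-s / 2)
print(c_numeric, c_exact)         # agree to quadrature accuracy
```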
Then use the fact that the derivative of $|x|^s$ in $s$ multiplies it by $\log|x|$, and set $s=0$. This is the nice way logarithms can arise. The implicit claim that we can do complex analysis with distribution-valued functions was legitimized by Schwartz, and is pervasive in Gelfand et al.
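Concretely, in one dimension (a sketch of mine, with the convention $\hat f(\xi) = \int f(x)\,e^{-2\pi i x\xi}\,dx$): writing $\widehat{|x|^s} = c(s)\,|\xi|^{-1-s}$, both factors degenerate at $s=0$, but in compensating ways:

$$c(s) = \pi^{-s-\frac12}\,\frac{\Gamma\!\left(\frac{1+s}{2}\right)}{\Gamma\!\left(-\frac{s}{2}\right)} = -\frac{s}{2} + O(s^2), \qquad |\xi|^{-1-s} = -\frac{2\,\delta}{s} + F_0 + O(s),$$

where $F_0$ is a renormalized (finite-part) version of $1/|\xi|$, the simple pole of the distribution-valued family $|\xi|^{\lambda}$ at $\lambda = -1$ having residue $2\delta$. The product is then holomorphic at $s=0$:

$$\widehat{|x|^s} = \delta + s\left(-\tfrac12 F_0 + (\text{const})\,\delta\right) + O(s^2).$$

Setting $s=0$ recovers $\hat 1 = \delta$, and the $s$-derivative at $0$ exhibits $\widehat{\log|x|}$ as $-\tfrac12 F_0$ plus a constant multiple of $\delta$ (I won't pin down the constant here). That multiple of $\delta$ is exactly the delta function appearing in Wolfram Alpha's answer.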
Products of $|x|^s$ by harmonic polynomials can be treated almost identically, using the representation theory of the orthogonal group on harmonic polynomials.
That is, very often, some sort of _unique characterization_ of the tempered distribution, and of its image under Fourier transform, reduces the computation to the determination of the relevant constant!
Edit: oops, as Bazin notes, the exponent is not $n-s$ but $-n-s$, and the local integrability assertion must be adjusted accordingly. (Adjusted above.)