Your question is at the core of not only signal processing but differential equations and orthogonal special functions, fields of study that have a long history and are still active and evolving, so it's a daunting task to point out where you could start your studies.
The Wikipedia article leonbloy pointed out, Generalized Fourier Series, and also the Green's Function article, with its section on eigenvalue expansions, introduce the jargon that you should be thoroughly familiar with.
The basic algorithm is to find dual sets of eigenvectors/eigenfunctions parametrized by a continuous (e.g., $\omega$ below) or discrete index (e.g., $n$ below), that satisfy completeness and orthogonality relations encapsulated in Dirac delta function resolutions such as that for the Fourier transform
$$\delta(x-y)= \int_{-\infty}^{\infty}\exp(i2\pi \omega x)\exp(-i2\pi \omega y)d\omega$$
giving
$$\int_{-\infty}^{\infty}f(y)\delta(x-y)dy=f(x)=\int_{-\infty}^{\infty}\exp(i2\pi \omega x)\int_{-\infty}^{\infty}f(y)\exp(-i2\pi \omega y) dy d\omega$$
$$=\int_{-\infty}^{\infty}\exp(i2\pi \omega x)\hat{f}(\omega) d\omega,$$
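As a numerical sanity check of this inversion pair, here is a small sketch (my own example, with arbitrarily chosen truncated grids) using the Gaussian $f(x)=e^{-\pi x^2}$, which is its own transform under this convention:

```python
import numpy as np

# Sketch: check the pair
#   fhat(w) = int f(y) exp(-i 2 pi w y) dy,   f(x) = int fhat(w) exp(i 2 pi w x) dw
# on f(x) = exp(-pi x^2), its own transform under this convention.
f = lambda t: np.exp(-np.pi * t**2)

y = np.linspace(-6, 6, 1201)
w = np.linspace(-6, 6, 1201)
dy, dw = y[1] - y[0], w[1] - w[0]

# forward transform by a simple Riemann sum (the integrand decays fast)
fhat = (f(y)[None, :] * np.exp(-2j * np.pi * np.outer(w, y))).sum(axis=1) * dy

# inverse transform at a few test points
x = np.array([0.0, 0.3, 1.0])
f_rec = (fhat[None, :] * np.exp(2j * np.pi * np.outer(x, w))).sum(axis=1) * dw

print(np.max(np.abs(f_rec.real - f(x))))   # tiny reconstruction error
```

The Gaussian's fast decay is what makes the crude truncation and quadrature here harmless; for slowly decaying functions you would need a wider grid.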
or that for the eigenfunctions of Sturm-Liouville differential operators over a finite domain $[a,b]$
$$\delta(x-y)=\sum_{n=0}^{\infty }\Psi_n(x)\Psi_n^*(y)$$
giving
$$f(x)=\sum_{n=0}^{\infty }\Psi_n(x)\int_{a}^{b}f(y)\Psi_n^*(y) dy,$$
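As a concrete Sturm-Liouville instance (my own illustration, not part of the answer above): $-d^2/dx^2$ on $[0,L]$ with Dirichlet boundary conditions has the orthonormal eigenfunctions $\Psi_n(x)=\sqrt{2/L}\,\sin(n\pi x/L)$, and the expansion can be checked numerically:

```python
import numpy as np

# Sketch: eigenfunctions of -d^2/dx^2 on [0, L] with Dirichlet BCs are
# Psi_n(x) = sqrt(2/L) sin(n pi x / L); expand f(x) = x (L - x) in them
# and compare the truncated series against f.
L = 1.0
x = np.linspace(0, L, 2001)
dx = x[1] - x[0]
f = x * (L - x)

recon = np.zeros_like(x)
for n in range(1, 50):
    psi = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
    c = np.sum(f * psi) * dx          # <f, Psi_n> over the finite domain
    recon += c * psi

print(np.max(np.abs(recon - f)))      # small truncation error
```

The coefficients here decay like $1/n^3$, so 50 terms already reconstruct $f$ to within a few parts in $10^5$.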
or Kronecker delta resolutions such as that for the associated Laguerre functions
$$\frac{(n+\alpha)!}{n!}\delta_{mn}=\int_{0}^{\infty}x^{\alpha}e^{-x}L_{n}^{\alpha}(x)L_{m}^{\alpha}(x)dx$$
giving
$$f(x)=\sum_{n=0}^{\infty }\frac{n!L_{n}^{\alpha}(x)}{(n+\alpha)!}\hat{f}_n$$
with
$$\hat{f}_n=\int_{0}^{\infty}x^{\alpha}e^{-x}L_{n}^{\alpha}(x)f(x)dx.$$
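The Laguerre expansion above can also be checked directly with scipy (a sketch with my own choices of $\alpha=1$ and $f(x)=x^2$; a degree-2 polynomial is captured exactly by the first three terms):

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

# Sketch: expand f(x) = x^2 in associated Laguerre functions L_n^alpha
# with weight x^alpha e^{-x}, using the normalization (n+alpha)!/n!
# from the text (alpha a nonnegative integer here).
alpha = 1
f = lambda x: x**2

def fhat(n):
    integrand = lambda x: x**alpha * math.exp(-x) * eval_genlaguerre(n, alpha, x) * f(x)
    val, _ = quad(integrand, 0, np.inf)
    return val

x0 = 2.5
approx = sum(math.factorial(n) / math.factorial(n + alpha)
             * eval_genlaguerre(n, alpha, x0) * fhat(n)
             for n in range(3))
print(approx, f(x0))   # the two agree
```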
The Fourier Transform and Its Applications by R. Bracewell is a really good book for grasping the fundamentals of the FT and DFT, as is G. Strang's Introduction to Applied Mathematics.
Methods of Applied Mathematics by F. Hildebrand and Principles and Techniques of Applied Mathematics by B. Friedman give good intros to Fredholm theory and Green's functions.
More advanced books on harmonic analysis, such as J. Partington's Interpolation, Identification, and Sampling might be the next leap if you are comfortable with complex analysis (e.g., fractional linear transformations) and other integral transforms such as the Laplace transform.
The discrete Fourier transform of a signal $\{x_j\}$ of length $N$ is a linear combination of the $x_j$'s with coefficients of the form $e^{-2\pi i jk/N}$ (up to sign and normalization conventions). This can be written compactly as
$$X_k=A_{kj} x_j$$
where $A_{kj}=e^{-2\pi i jk/N}$ is the transformation matrix and summation over the repeated index $j$ is implied. The matrix is invertible (this is why you also have the inverse transform).
Having understood that, you see that the DFT is nothing more than a change of basis in the space $\mathbb{C}^N$, where $N$ is the signal length. Since any basis of this space has exactly $N$ elements, fully describing your signal always takes exactly $N$ complex numbers (or $2N$ real ones).
Therefore, if your signal is complex, you will always need all the coefficients of the DFT. If your signal is purely real, the DFT coefficients $X_k$ satisfy the conjugate symmetry $X_k=X_{N-k}^*$, and you need only about half of them.
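This change-of-basis view is easy to verify in numpy (a sketch; the size $N=8$ and random test signal are arbitrary choices):

```python
import numpy as np

# Sketch: the DFT as multiplication by the matrix A_{kj} = exp(-2 pi i j k / N).
# A is invertible (A^{-1} = conj(A)/N), and for a real signal the
# coefficients obey the conjugate symmetry X_k = conj(X_{N-k}).
N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N))
A = np.exp(-2j * np.pi * j * k / N)

x = np.random.default_rng(0).standard_normal(N)   # a real test signal
X = A @ x

print(np.allclose(X, np.fft.fft(x)))              # matches the library DFT
print(np.allclose(A.conj().T @ X / N, x))         # invertibility
print(np.allclose(X[1:], X[:0:-1].conj()))        # X_k = conj(X_{N-k})
```

The inverse here is just the conjugate-transposed matrix divided by $N$, i.e., the basis is orthogonal with squared norm $N$.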
Let $f$ and $g$ be functions of a real variable and let $F(f)$ and $F(g)$ be their Fourier transforms. Then the Fourier transform is linear in the sense that, for complex numbers $a$ and $b$,
$$F(af + bg) = a F(f) + b F(g)$$
i.e., it has the same notion of linearity that you may be used to from linear algebra. This is no accident: it expresses the fact that functions form an infinite-dimensional vector space, with addition and multiplication by a scalar defined pointwise in the obvious way:
$$(f+g)(x) = f(x) + g(x)$$ $$(af)(x) = a f(x)$$
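The discrete transform shares this linearity, so it is easy to check numerically (a sketch; the random signals and the constants $a$, $b$ are arbitrary choices of mine):

```python
import numpy as np

# Sketch: linearity F(a f + b g) = a F(f) + b F(g), checked on the
# discrete Fourier transform, which has the same property.
rng = np.random.default_rng(1)
f, g = rng.standard_normal(64), rng.standard_normal(64)
a, b = 2.0 + 1.0j, -0.5j

lhs = np.fft.fft(a * f + b * g)
rhs = a * np.fft.fft(f) + b * np.fft.fft(g)
print(np.allclose(lhs, rhs))   # True
```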