The discrete Fourier transform and its inverse say nothing about the continuous-time signal (or its Fourier transform) from which the sample values were obtained. The vector $(x(0), x(1), \ldots, x(N-1))$ can represent
sample values spaced $T$ seconds apart for any choice of $T$, and thus
the $y(t)$ that you seek cannot be determined until you specify $T$.
In your example, you chose $N=11$ and $T = 1.5$ seconds, and so you know the values of
$y(t)$ for $t = 0, 1.5, 3.0, \ldots, 15.0$.
Your signal $y(t)$ is unknown except for its values at $11$ points on the
time axis: $y(1.5n) = x(n), 0 \leq n < 11$.
Anything more that could be said about $y(t)$ depends on what
assumptions you are willing to make. One common assumption is
that (for the case when $N$ is odd)
$y(t)$ can be represented on the interval $[0, NT] = [0,16.5]$
by the finite Fourier series
$$y(t) = \sum_{k=-(N-1)/2}^{(N-1)/2}
Y_k \exp(i 2\pi f_0 kt) $$
where $f_0 = 1/NT$ Hz.
Note that this means that $y(t)$ has no frequencies higher than
$((N-1)/2)f_0 < Nf_0/2 = (2T)^{-1}$, that is, half the sampling
rate $T^{-1}$.
Now, the Fourier coefficients cannot be computed in the
usual manner as
$Y_k = \frac{1}{16.5}\int_{0}^{16.5}y(t)\exp(-i2\pi kf_0 t)
\,\mathrm dt$
because $y(t)$ is not known except at the sample points.
But, when $N = 11$,
$$\begin{align*}
y(nT) &= \sum_{k=-5}^{5}
Y_k \exp(i2\pi kf_0 nT) = \sum_{k=-5}^{5}
Y_k \exp(i 2\pi kn/11)\\
&= \sum_{k=0}^{5} Y_k \exp(i 2\pi kn/11)
+ \sum_{k=6}^{10} Y_{k-11} \exp(i 2\pi kn/11)\\
\text{But,}\qquad \qquad
y(nT) &= x(n)\\
&= \frac{1}{N} \sum_{k=0}^{10} X_k \exp(i 2\pi kn/11)
\end{align*}$$
and so we have that
$Y_k = \begin{cases}\frac{1}{N}X_k, & 0 \leq k \leq 5,\\
\frac{1}{N} X_{k+11}, & -5 \leq k < 0.\end{cases}$
So the Fourier coefficients can be determined from the
known values of the discrete Fourier transform. Note that
I have discussed the case when $N$ is odd. When $N$ is
even, a slightly different calculation is used to
account for the relationship between $Y_{\pm N/2}$ and
$X_{N/2}$.
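To make this concrete, here is a minimal numpy sketch of the reconstruction above (odd $N$ only), assuming the samples are real; the test signal and the helper names (`y`, `k_signed`) are my own, not part of the original setup. Note that numpy's `fft` matches the convention used for $X_k$ here, with the $1/N$ factor appearing on the inverse transform.

```python
import numpy as np

# Trigonometric interpolation from the DFT, for odd N.
N, T = 11, 1.5
t_samples = T * np.arange(N)
x = np.sin(2 * np.pi * 0.2 * t_samples)          # stand-in for your 11 samples

X = np.fft.fft(x)                                # X_k for k = 0, ..., N-1
f0 = 1.0 / (N * T)                               # fundamental frequency, Hz
k = np.arange(N)
k_signed = np.where(k <= (N - 1) // 2, k, k - N) # maps k = 6..10 to -5..-1
Y = X / N                                        # Y_k = X_k / N, as derived above

def y(t):
    """Evaluate the finite Fourier series at arbitrary times t."""
    t = np.atleast_1d(t)
    return np.real(Y @ np.exp(2j * np.pi * f0 * np.outer(k_signed, t)))

assert np.allclose(y(t_samples), x)              # interpolant matches the samples
```

Evaluating `y` between the sample points gives the band-limited interpolant that the Fourier-series assumption singles out.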
The exponential function $e^{zt}$ is an eigenfunction of the differentiation operator, with eigenvalue $z$. The Fourier transform essentially expresses a function as a linear combination of $e^{i\omega t}$ basis functions, so differentiating this linear combination is simply multiplication by the eigenvalue $i\omega$. This is analogous to applying a linear operator to a linear combination of basis vectors: the result is entirely determined by the operator's mapping of the basis vectors.
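As a quick illustration (my own sketch, not part of the answer above), differentiating a periodic signal via the FFT amounts to exactly this multiplication by $i\omega$:

```python
import numpy as np

# Spectral differentiation: transform, multiply each Fourier coefficient
# by its eigenvalue i*omega, then transform back.
N, L = 256, 2 * np.pi
t = np.linspace(0, L, N, endpoint=False)
f = np.sin(3 * t)

omega = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular frequencies
df = np.fft.ifft(1j * omega * np.fft.fft(f)).real

assert np.allclose(df, 3 * np.cos(3 * t), atol=1e-10)  # d/dt sin(3t) = 3cos(3t)
```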
Indeed, the fact that this is a Fourier transform is by and large a mathematical coincidence; the intuition comes not from interpreting it as a Fourier transform, but by considering it from another angle, that of moment generating functions.
Throughout this answer, I assume all random variables are real-valued; it seems like that's what you're concerned about anyway.
If you have done some statistics, you are almost certainly familiar with the concept of the moment generating function of $X$, $$ M_X : \mathbb R \to \mathbb R \\ M_X(t) = \mathbb E\big[e^{tX}\big]. $$ This function has many nice properties. For instance, the $n$-th moment of $X$, $\mathbb E\big[X^n\big]$, can be found by computing $M_X^{(n)}(0)$, the $n$-th derivative of $M_X$ evaluated at $0$. Another important application is the fact that two random variables with the same moment generating function have the same distribution; that is to say, the process of determining a moment generating function is "invertible". A third and also significant application is the fact that, for any two independent random variables $X$ and $Y$, we have \begin{align*} M_{X+Y}(t) &= \mathbb E \big[e^{t(X+Y)}\big] \\ &= \mathbb E \big[e^{tX} e^{tY}\big] \\ &= \mathbb E \big[e^{tX} \big] \mathbb E \big[e^{tY} \big] \\ &= M_X(t)M_Y(t). \end{align*} (In a somewhat informal sense the third equality follows by considering $e^{tX}$ and $e^{tY}$ as independent random variables.) In conjunction with the fact that moment generating functions are invertible, this essentially permits us to derive a formula for the distribution of the sum of two independent random variables; hopefully, this application also makes clear why there is a seemingly arbitrary exponential in the definition of the moment generating function.
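For concreteness, here is a small sympy check of the moment property; the choice of distribution is mine, using the exponential with rate $\lambda$, whose MGF $M(t) = \lambda/(\lambda - t)$ is known in closed form:

```python
import sympy as sp

# Verify that the n-th derivative of the MGF at 0 gives the n-th moment.
# For an exponential with rate lam, E[X^n] = n! / lam**n.
t, lam = sp.symbols('t lam', positive=True)
M = lam / (lam - t)

for n in range(1, 4):
    moment = sp.diff(M, t, n).subs(t, 0)
    print(n, sp.simplify(moment))    # prints 1/lam, 2/lam**2, 6/lam**3
```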
Now, the classical example of an application of moment generating functions is in the proof of the Central Limit Theorem. They are a natural candidate, because the CLT involves sums of independent random variables, and moment generating functions are well-equipped to deal with such matters. However, there is a glaring issue with their use: moment generating functions do not always exist. In particular, a sufficiently heavy-tailed random variable (the Cauchy distribution is the standard example) has a moment generating function that diverges for every $t$ other than $0$.
This is where characteristic functions come in. As you know, we define the characteristic function by $$ \varphi_X : \mathbb R \to \mathbb C \\ \varphi_X(t) = \mathbb E \big[ e^{itX} \big]. $$ All of the nice properties of moment generating functions mentioned above still apply to characteristic functions. In particular:
- the $n$-th moment of $X$, if it exists, can be found as $(-i)^n \varphi_X^{(n)}(0)$;
- two random variables with the same characteristic function have the same distribution;
- $\varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t)$ for independent random variables $X$ and $Y$ (this is proven essentially the same way as before; a quick numerical check is sketched below).
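Here is that Monte Carlo sketch of the third property; the choice of two independent standard normals is mine, purely for illustration:

```python
import numpy as np

# Empirically check phi_{X+Y}(t) = phi_X(t) * phi_Y(t) for independent X, Y.
rng = np.random.default_rng(1)
X = rng.standard_normal(500_000)
Y = rng.standard_normal(500_000)

t = 1.3
lhs = np.mean(np.exp(1j * t * (X + Y)))                        # phi_{X+Y}(t)
rhs = np.mean(np.exp(1j * t * X)) * np.mean(np.exp(1j * t * Y))
print(abs(lhs - rhs))    # small, up to Monte Carlo error
```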
The critical difference from moment generating functions is this: characteristic functions always exist, at least for real-valued random variables. The intuitive reason is that the values taken by $e^{itX}$ all lie on the unit circle, hence are bounded, and so the integral defining the expected value always converges (to a value in the closed unit disk). Going back to the CLT example, this allows us to complete the proof without issue; indeed, if you are interested, the proof on the Wikipedia page uses characteristic functions.
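The standard Cauchy distribution makes this contrast vivid: its moment generating function diverges for every $t \neq 0$, yet its characteristic function exists and equals $e^{-|t|}$ (a standard closed form; the rest of this sketch is my own illustration):

```python
import numpy as np

# The empirical E[exp(itX)] for Cauchy samples converges to exp(-|t|),
# even though E[exp(tX)] is infinite for every t != 0.
rng = np.random.default_rng(0)
X = rng.standard_cauchy(1_000_000)

for t in (0.5, 1.0, 2.0):
    phi_hat = np.mean(np.exp(1j * t * X))
    print(t, abs(phi_hat - np.exp(-abs(t))))   # estimation error, near 0
```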
Based on this little narrative, it is pretty clear that the entire motivation for the introduction of $i$ in the exponent of the characteristic function is that convergence is then guaranteed for any real-valued random variable. It is not much more than a nice mathematical coincidence that the characteristic function coincides with the Fourier transform, and it makes little sense (at least in my opinion) to try to carry over intuitions from the Fourier transform to the characteristic function; instead, the intuition can be seen by thinking about how this function might have been discovered in the first place.