No, it is not true that a process W satisfying the properties (1), (3) and (4) has to be a Brownian motion. We can construct a counter-example as follows.
This construction is rather contrived, and I don't know if there are any simple examples.
Start with a standard Brownian motion $W$. The idea is to apply a small bump to its distribution while retaining the required properties. I will do this by first reducing it to the discrete-time case. So, choose a finite sequence of times $0=t_0<t_1<\cdots<t_n$. Then define a piecewise linear process $X$ by $X_{t_k}=W_{t_k}$ ($k=0,1,\ldots,n$) and such that $X$ is linearly interpolated across each of the intervals $[t_{k-1},t_k]$ and constant over $[t_n,\infty)$.
Then, $Y=W-X$ is a continuous process independent of $X$. In fact, $Y$ is just a sequence of Brownian bridges across the intervals $[t_{k-1},t_k]$ and is a standard Brownian motion on $[t_n,\infty)$. Also, by linear interpolation, for any time $t\ge0$, $X_t$ is a linear combination of at most two of the random variables $X_{t_1},\ldots,X_{t_n}$. The increments of $W$,
$$
W_t-W_s = X_t-X_s + Y_t-Y_s,
$$
are then a linear combination of at most 4 of the random variables $X_{t_1},\ldots,X_{t_n}$ plus an independent term. So, choosing $n\ge5$, if it is possible to replace $(X_{t_1},\ldots,X_{t_n})$ by any other $\mathbb{R}^n$-valued random variable without changing the joint distribution of any 4 elements, then the distributions of the increments $W_t-W_s$ will be left unchanged. So, properties (1), (3), (4) will still be satisfied but the new process for $W$ will not be a standard Brownian motion. It is possible to change the distribution in this way:
Let $X=(X_1,X_2,\ldots,X_n)$ be an $\mathbb{R}^n$-valued random variable with a continuous and strictly positive probability density $p_X\colon\mathbb{R}^n\to\mathbb{R}$. Then, there exists a random variable $Y=(Y_1,Y_2,\ldots,Y_n)$ with a different distribution than $X$ but for which the projection onto any $n-1$ elements has the same distribution as for $X$.
That is, for any $k_1,k_2,\ldots,k_{n-1}$ in $\{1,\ldots,n\}$, $(Y_{k_1},Y_{k_2},\ldots,Y_{k_{n-1}})$ has the same distribution as $(X_{k_1},X_{k_2},\ldots,X_{k_{n-1}})$.
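As an aside, the decomposition $W=X+Y$ above is easy to simulate; here is a minimal Python sketch (the grid, times, and seed are my own choices, and the bridge process is called `B` to avoid clashing with the $Y$ of the statement above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fine grid on [0, 5] and interpolation times t_1 < ... < t_5 (t_0 = 0 implicit).
dt = 1e-3
grid = np.arange(0.0, 5.0 + dt, dt)
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Standard Brownian motion W: cumulative sum of independent Gaussian steps.
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), len(grid) - 1))])

# X: piecewise-linear interpolation of W through the points (t_k, W_{t_k}),
# constant after t_n (np.interp holds the last value).
idx = np.searchsorted(grid, t)
X = np.interp(grid, np.concatenate([[0.0], t]), np.concatenate([[0.0], W[idx]]))

# B = W - X (the process called Y above) vanishes at every t_k:
# a chain of Brownian bridges across the intervals [t_{k-1}, t_k].
B = W - X
print(np.abs(B[idx]).max())  # ~0 up to floating-point error
```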
We can construct the probability density $p_Y$ of $Y$ by applying a bump to the probability density of $X$,
$$
p_Y(x)=p_X(x)+\epsilon f(x_1)f(x_2)\cdots f(x_n).
$$
Here, $\epsilon$ is a fixed real number and $f\colon\mathbb{R}\to\mathbb{R}$ is a continuous function of compact support and zero integral, $\int_{-\infty}^\infty f(x)\,dx=0$. Then, $\int_{-\infty}^\infty p_Y(x)\,dx_k=\int_{-\infty}^\infty p_X(x)\,dx_k$ for each $k$. So, the integral of $p_Y$ over $\mathbb{R}^n$ is 1 and, by choosing $\epsilon$ small, $p_Y$ will be positive (as $p_X$ is continuous and strictly positive, it is bounded below by a positive constant on the compact support of $f(x_1)f(x_2)\cdots f(x_n)$). Then it is a valid probability density function. Finally, as the integral along the $k$th direction (any $k$) agrees for $p_X$ and $p_Y$, the projections of $X$ and $Y$ onto $\mathbb{R}^{n-1}$ along the $k$th direction give the same distribution.
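As a sanity check, here is a small numerical sketch of this bump construction in Python with $n=2$ (the normal density, the choice $f(x)=\sin(2\pi x)$ on $[0,1]$, and the grid are my own illustrative choices; the mechanics are identical in higher dimensions):

```python
import numpy as np

# One-dimensional bump: continuous, compactly supported on [0, 1], zero integral.
def f(x):
    return np.where((x >= 0) & (x <= 1), np.sin(2 * np.pi * x), 0.0)

# p_X: density of n = 2 independent standard normals (an illustrative choice).
def phi(x):
    return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

xs = np.linspace(-6.0, 6.0, 2401)
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
pX = phi(X1) * phi(X2)

eps = 0.01
pY = pX + eps * f(X1) * f(X2)  # the bumped density p_Y

print(pY.min() > 0)                         # still a positive density
print(np.abs((pY - pX).sum(axis=0)).max())  # integral over x_1 unchanged (~0)
print(np.abs((pY - pX).sum(axis=1)).max())  # integral over x_2 unchanged (~0)
print(np.abs(pY - pX).max())                # but the joint density changed
```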
The integral equation is solved just as in the cases of Brownian motion and the Brownian bridge.
The eigenfunctions are sine functions, and the tricky parts are the eigenvalues and the distribution of the random coefficients in the Karhunen-Loève (K-L) expansion.
If $g$ is the eigenfunction with eigenvalue $\gamma$, then $\gamma g''=-\lambda g$ and $g(0)=0$. If you substitute back into the integral equation for $g$, then you get an equation for $\gamma$ in terms of sine, cosine, and $\lambda$.
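For orientation, here is how the same scheme plays out in the classical standard Brownian motion case on $[0,1]$ (this is the textbook computation, included only for comparison; it is not the $\lambda$-dependent problem referred to above). Differentiating the integral equation $\int_0^1\min(s,t)g(t)\,dt=\gamma g(s)$ twice gives $\gamma g''=-g$ with boundary conditions $g(0)=0$ and $g'(1)=0$, so
$$
g_k(s)=\sqrt{2}\,\sin\bigl((k-\tfrac{1}{2})\pi s\bigr),\qquad
\gamma_k=\frac{1}{(k-\frac{1}{2})^2\pi^2},\qquad k=1,2,\ldots,
$$
and the random coefficients in the K-L expansion are independent $N(0,\gamma_k)$ variables.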
I have recently submitted a manuscript for publication giving the complete solution for
the eigenfunction/eigenvalue part of this problem and some generalizations.
Prof Eric Key
Dept of Math Sci
UW-Milwaukee.
You cannot define a Lévy process by the individual distributions of its increments, except in the trivial case of a deterministic process $X_t-X_0=bt$ with constant $b$. In fact, you can't identify it by the $n$-dimensional marginals for any $n$.
Taking $n=2$ will give a process whose increments have the same distributions as for $X$, since the joint distribution of $(X_s,X_t)$ determines that of the increment $X_t-X_s$.
The idea (as in my answer to this related question) is to reduce it to the finite-time case. So, fix a set of times $0=t_0<t_1<t_2<\cdots<t_m$ for some $m>1$. We can look at the distribution of $X$ conditioned on the $\mathbb{R}^m$-valued random variable $U\equiv(X_{t_1},X_{t_2},\ldots,X_{t_m})$. By the Markov property, it will consist of a set of independent processes on the intervals $[t_{k-1},t_k]$ and $[t_m,\infty)$, where the distribution of $\{X_t\}_{t\in[t_{k-1},t_k]}$ only depends on $(X_{t_{k-1}},X_{t_k})$ and the distribution of $\{X_t\}_{t\in[t_m,\infty)}$ only depends on $X_{t_m}$. By the disintegration theorem, the process $X$ can be built by first constructing the random variable $U$, then constructing $X$ to have the correct probabilities conditional on $U$. Doing this, the distribution of $X$ at any one time only depends on the values of at most two elements of $U$ (corresponding to $X_{t_{k-1}},X_{t_k}$). The distribution of $X$ at any set of $n$ times depends on the values of at most $2n$ elements of $U$.
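To illustrate the "construct $U$ first, then fill in conditionally" recipe in the simplest case, here is a short Python sketch for Brownian motion, where the conditional pieces are explicit Brownian bridges (the grid sizes, seed, and function names are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def bridge(t0, x0, t1, x1, n_steps):
    """Sample a Brownian path on [t0, t1] conditioned on X_{t0}=x0, X_{t1}=x1."""
    ts = np.linspace(t0, t1, n_steps + 1)
    # Free Brownian motion started at x0 ...
    dW = rng.normal(0.0, np.sqrt(np.diff(ts)))
    W = x0 + np.concatenate([[0.0], np.cumsum(dW)])
    # ... pinned to x1 at t1 by the standard bridge transformation.
    return ts, W + (x1 - W[-1]) * (ts - t0) / (t1 - t0)

# First sample U = (X_{t_1}, ..., X_{t_m}) for Brownian motion, then fill in
# each interval [t_{k-1}, t_k] with an independent bridge given its endpoints.
t = np.array([0.0, 1.0, 2.0, 3.0])
U = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(np.diff(t))))])
path = [bridge(t[k - 1], U[k - 1], t[k], U[k], 100) for k in range(1, len(t))]
```

Each call to `bridge` uses only the endpoint pair $(X_{t_{k-1}},X_{t_k})$, matching the dependence structure described above.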
Choosing $m>2n$, the idea is to replace $U$ by a differently distributed $\mathbb{R}^m$-valued random variable for which any $2n$ elements still have the same distribution as for $U$. We can apply a small bump to the distribution of $U$ in such a way that the $m-1$ dimensional marginals are unchanged. To do this, we can use the following.
(2) Let $U$ be an $\mathbb{R}^m$-valued random variable whose distribution $\mu$ satisfies $\mu\ge\mu_1\times\mu_2\times\cdots\times\mu_m$ for some non-trivial measures $\mu_1,\mu_2,\ldots,\mu_m$ on $\mathbb{R}$. Then, there exists an $\mathbb{R}^m$-valued random variable $V$ with a different distribution to $U$ but with the same $m-1$ dimensional marginals. (By 'non-trivial' I mean that $\mu_k$ is a non-zero measure and does not consist of a single atom.)
By changing the distribution of $U$ in this way, we construct a new cadlag process with a different distribution to $X$, but with the same $n$ dimensional marginals.
Proving (2) is easy enough. As the $\mu_k$ are non-trivial, there will be measurable functions $f_k$ on the reals, uniformly bounded by 1 and such that $\mu_k(f_k)=0$ and $\mu_k(\lvert f_k\rvert)>0$. Replacing $\mu_k$ by the signed measure $f_k\cdot\mu_k$, we can assume that $\mu_k(\mathbb{R})=0$. Then $$ \mu_V = \mu + \mu_1\times\mu_2\times\cdots\times\mu_m $$ is a probability measure different from $\mu$ (it is non-negative because $\lvert f_k\rvert\le1$ and $\mu$ dominates the product of the original measures, and it has total mass 1 because each $\mu_k(\mathbb{R})=0$). Choosing $V$ with this distribution gives $$ {\mathbb E}[f(V)]=\mu_V(f)=\mu(f)={\mathbb E}[f(U)] $$ for any function $f\colon\mathbb{R}^m\to\mathbb{R}_+$ independent of one of the dimensions, since integrating $\mu_1\times\cdots\times\mu_m$ over that dimension gives zero. So, $V$ has the same $m-1$ dimensional marginals as $U$.
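Here is a concrete finite instance of (2) with $m=3$ (my own toy example): $\mu$ is the uniform distribution on $\{0,1\}^3$, each $\mu_k$ is half the uniform measure on $\{0,1\}$, and $f_k(0)=1$, $f_k(1)=-1$. The perturbed measure $\mu_V$ turns out to be uniform on the four even-parity points, which has the same two-dimensional marginals as $\mu$ but a different joint distribution:

```python
from itertools import product

pts = list(product([0, 1], repeat=3))

# mu: three independent fair bits; mu_V = mu + (f_1 mu_1) x (f_2 mu_2) x (f_3 mu_3),
# where each signed measure f_k mu_k puts mass +1/2 at 0 and -1/2 at 1.
mu = {x: 1 / 8 for x in pts}
mu_V = {x: 1 / 8 + (1 / 8) * (-1) ** sum(x) for x in pts}

def pair_marginal(m, i, j):
    """Distribution of coordinates (i, j) under the measure m."""
    return {(a, b): sum(p for x, p in m.items() if (x[i], x[j]) == (a, b))
            for a, b in product([0, 1], repeat=2)}

for i, j in [(0, 1), (0, 2), (1, 2)]:
    assert pair_marginal(mu, i, j) == pair_marginal(mu_V, i, j)
print(mu_V)  # mass 1/4 on even-parity points, 0 on odd-parity points
```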
To apply (2) to $U=(X_{t_1},X_{t_2},\ldots,X_{t_m})$, consider the following cases.
1. $X$ is continuous. In this case, $X$ is just a Brownian motion (up to multiplication by a constant and addition of a constant drift). So, $U$ is joint-normal with a nondegenerate covariance matrix. Its probability density is continuous and strictly positive so, in (2), we can take $\mu_k$ to be a multiple of the uniform measure on $[0,1]$.
2. $X$ is a Poisson process. In this case, we can take $\mu_k$ to be a multiple of the (discrete) uniform distribution on $\{2k,2k+1\}$ and, as $X$ can take any increasing nonnegative integer-valued path on the times $t_k$, this satisfies the hypothesis of (2).
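As a quick numerical check of the domination hypothesis in case 2 (the rate and times are my own choices): for a rate-1 Poisson process observed at $t=(1,2,3)$, every path with $u_k\in\{2k,2k+1\}$ has strictly positive probability, so $\mu\ge c\,\mu_1\times\mu_2\times\mu_3$ for a small enough constant $c>0$:

```python
import numpy as np
from itertools import product
from scipy.stats import poisson

t = np.array([1.0, 2.0, 3.0])
dt = np.diff(np.concatenate([[0.0], t]))  # independent-increment lengths

probs = []
for u in product(*[(2 * k, 2 * k + 1) for k in range(1, 4)]):
    inc = np.diff(np.concatenate([[0], u]))       # increments of the path
    probs.append(np.prod(poisson.pmf(inc, dt)))   # P(N_{t_1}=u_1, ..., N_{t_3}=u_3)
print(min(probs) > 0)  # True
```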
If $X$ is any non-continuous Lévy process, case 2 can be used to change the distribution of its jump times without affecting the $n$ dimensional marginals: Let $\nu$ be its jump measure, and $A$ be a Borel set such that $\nu(A)$ is finite and nonzero. Then, $X$ decomposes as the sum of its jumps in $A$ (which occur according to a Poisson process of rate $\nu(A)$) and an independent Lévy process. In this way, we can reduce to the case where $X$ is a Lévy process whose jumps occur at a finite rate, with arrival times given by a Poisson process. In that case, let $N_t$ be the Poisson process counting the number of jumps in the intervals $[0,t]$. Also, let $Z_k$ be the $k$th jump of $X$. Then, $N$ and the $Z_k$ are all independent and $$ X_t=\sum_{k=1}^{N_t}Z_k. $$ As above, the Poisson process $N$ can be replaced by a differently distributed cadlag process which has the same $n$ dimensional marginals. This will not affect the $n$ dimensional marginals of $X$ but, as its jump times no longer occur according to a Poisson process, $X$ will no longer be a Lévy process.
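Finally, a minimal Python sketch of the compound Poisson representation $X_t=\sum_{k=1}^{N_t}Z_k$ used in this last step (the rate, jump distribution, and seed are my own illustrative choices; the marginal-preserving replacement of $N$ itself is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rate-lam Poisson process N on [0, T] with iid jump sizes Z_k
# (standard normal, purely as an illustrative choice).
lam, T = 2.0, 10.0
n_jumps = rng.poisson(lam * T)
jump_times = np.sort(rng.uniform(0.0, T, n_jumps))  # arrival times given N_T
Z = rng.normal(0.0, 1.0, n_jumps)

def X(t):
    """Value of the compound Poisson process at time t."""
    return Z[: np.searchsorted(jump_times, t, side="right")].sum()

print(X(T), Z.sum())  # equal: every jump has arrived by time T
```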