Solved – Particle filter in MATLAB – what is going wrong

correlation · MATLAB · monte carlo · particle filter

I posted this question on Electronics.StackExchange, and someone told me I'd be better off posting it here.

It's an implementation of a particle filter in MATLAB, but the results never follow the observations. I changed the weighting function to be Gaussian, but still to no avail.

NOTE: I haven't taken into account that the process/measurement noise is time-correlated. Will this significantly change the accuracy of my program? If so, how do I correct for it?

MY OLD QUESTION

I'm working on a particle filter experiment for multi-sensor fusion, and I have just programmed it in MATLAB. However, I get very low accuracy in my final values. Also, I have read a lot of literature that talks about the pdf of the state, the observations, etc., but my practical understanding is still extremely shaky, since I've had no formal training in filtering or Bayesian estimation.

I have devised my algorithm like this:

  1. Initialize the particles – I draw them from a Gaussian distribution (10 particles).

  2. Move the 10 particles forward using the state transition equation:

\begin{equation}
X_{t+1} = A \times X_t + 0.1 \times randn()
\end{equation}
I'm only injecting Gaussian noise so far (MATLAB's randn draws Gaussian noise; rand is uniform).

  3. Using the observation, calculate the weights for the particles. I do this as the root mean square of the difference between the predicted state and the observation. For example, if in my state azimuth (a) = 40, pitch (p) = 3, and roll (r) = 4, while in my observation a = 39, p = 3, r = 3, then rms = sqrt((40-39)^2 + (3-3)^2 + (4-3)^2). My weight is then assigned as 1/rms, so that it is inversely proportional to the 'distance' between the prediction and the observation.

  4. Then I normalize these weights to get norm_weight = weight/norm(weight) so that their sum is equal to one.

  5. Then I continue forward through all the observations. I have not included resampling yet, because when I run this experiment I do not see any degeneracy, which is also very puzzling.
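For reference, the steps above can be sketched end to end as a minimal bootstrap particle filter. This is an illustrative NumPy version, not my actual code: a Gaussian likelihood replaces the 1/rms weight, a resampling step is included, and all names (`particle_filter`, `meas_std`, ...) are made up for the sketch.

```python
import numpy as np

def particle_filter(observations, A, n_particles=10, noise_std=0.1, meas_std=1.0):
    """Minimal bootstrap particle filter sketch following steps 1-5."""
    dim = observations.shape[1]
    rng = np.random.default_rng(0)
    # Step 1: initialize the particles from a Gaussian
    particles = rng.normal(0.0, 1.0, size=(n_particles, dim))
    estimates = []
    for z in observations:
        # Step 2: propagate through the state transition X_{t+1} = A X_t + noise
        particles = particles @ A.T + noise_std * rng.normal(size=(n_particles, dim))
        # Step 3: weight each particle by a Gaussian likelihood of the observation
        dist2 = np.sum((particles - z) ** 2, axis=1)
        weights = np.exp(-0.5 * dist2 / meas_std ** 2)
        # Step 4: normalize with the sum so the weights add up to one
        weights /= weights.sum()
        # Weighted-mean estimate of the state
        estimates.append(weights @ particles)
        # Step 5, plus multinomial resampling to avoid eventual degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)
```

With enough particles and a constant observation, the estimates drift toward the observed value step by step, which is the behavior I would expect from the filter.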

Where am I going wrong? I realize that I haven't 'computed' a lot of the Bayesian equations given in the literature, i.e.

\begin{equation}
p(x \mid z_t) = \frac{p(z_t \mid x) \times p(x)}{p(z_t)}
\end{equation}
and I don't know where it fits in here either. Can somebody please help me?
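From what I've read, the connection seems to be the following (I would appreciate confirmation): in a bootstrap particle filter the Bayes rule is applied implicitly – the weighted particles represent the posterior $p(x_t \mid z_{1:t})$, and the weight update is the likelihood term. Assuming a Gaussian measurement model with standard deviation $\sigma$ (an assumption, not something from my setup), the unnormalized weight of particle $i$ would be

\begin{equation}
w_t^{(i)} \propto p(z_t \mid x_t^{(i)}) \propto \exp\left(-\frac{\lVert z_t - x_t^{(i)} \rVert^2}{2\sigma^2}\right)
\end{equation}

so my 1/rms is an ad hoc stand-in for the likelihood $p(z_t \mid x)$, and normalizing the weights plays the role of dividing by $p(z_t)$.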

My MATLAB code looks like this:

function resultx = particlefilter(resultx_1, observationx, A, noiseP)
  for j = 1:length(observationx)
    for i = 1:length(resultx_1)
      % Propagate each particle through the state transition model
      apriori_state{i} = A * resultx_1{i} + noiseP;
      % Distance between the predicted state and the observation
      rms(i) = sqrt((observationx{j}(1) - apriori_state{i}(1))^2 + ...
                    (observationx{j}(2) - apriori_state{i}(2))^2);
      % Weight inversely proportional to that distance
      weight(i) = 1 / rms(i);
    end
    norm_weight = weight / norm(weight);
    for i = 1:length(apriori_state)
      plot(apriori_state{i});
    end
    disp(rms);
    disp(norm_weight);
  end
  resultx = apriori_state;  % return the propagated particles
end

Best Answer

The problem may stem from the fitness ratio you use in your code. The inverses of the distances can produce very large differences in the resampling probabilities for the next generation of samples. For example, let x stand for the rms values in your design and let them be

x = [0.1 1 1 3 10 50]

When you use the inverses, that leads to

y = (1./x) ./ norm(1./x)
y = 
    0.9896    0.0990    0.0990    0.0330    0.0099    0.0020

As you see, the first sample dominates the next generation, so the filter may converge to wrong proposals very easily. Alternatively, you may use a smoother weighting function, such as an exponential of the negative distance. For example,

z = exp(-x) ./ norm(exp(-x))
z = 
    0.8659    0.3521    0.3521    0.0476    0.0000    0.0000

Now the particles in the next generation will probably include some samples similar to the second and third ones, so the generation scheme will be more robust to erroneous proposals.
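The comparison is easy to reproduce; here is an illustrative NumPy version of the two weightings applied to the example values:

```python
import numpy as np

# The rms values from the example above
x = np.array([0.1, 1.0, 1.0, 3.0, 10.0, 50.0])

# Inverse-distance weights, L2-normalized as in the question
y = (1.0 / x) / np.linalg.norm(1.0 / x)
# y ≈ [0.9896, 0.0990, 0.0990, 0.0330, 0.0099, 0.0020]

# Exponential weights, L2-normalized
z = np.exp(-x) / np.linalg.norm(np.exp(-x))
# z ≈ [0.8659, 0.3521, 0.3521, 0.0476, 0.0000, 0.0000]
```

Under the inverse weighting the second-best sample keeps only a tenth of the best sample's weight; under the exponential weighting it keeps roughly 40 percent, so diversity survives the resampling.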

Edit: Moreover, after dividing by the norm, the weights do not sum to 1. You should use sum instead.

y = (1./x) ./ sum(1./x)
y = 
    0.8030    0.0803    0.0803    0.0268    0.0080    0.0016

z = exp(-x) ./ sum(exp(-x))
z =
    0.5353    0.2176    0.2176    0.0295    0.0000    0.0000
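Once the weights sum to one, they can be used directly as resampling probabilities. A minimal multinomial-resampling sketch in NumPy (the `resample` helper is made up for illustration):

```python
import numpy as np

def resample(particles, weights, rng=None):
    """Multinomial resampling: draw each new particle with probability = its weight."""
    if rng is None:
        rng = np.random.default_rng(0)
    weights = weights / weights.sum()  # sum-normalize, not norm-normalize
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

x = np.array([0.1, 1.0, 1.0, 3.0, 10.0, 50.0])  # rms distances
w = np.exp(-x) / np.sum(np.exp(-x))             # the sum-normalized weights z above
new_generation = resample(np.arange(6.0), w)    # new set drawn in proportion to w
```

Because w sums to one, it is a valid probability vector for the draw, and the next generation is dominated by the low-distance particles without collapsing onto a single one.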