No Model
Wayne points out that you say nothing about the process generating the data. In particular, you don't say whether the quantity you are tracking, the 'state', is fixed or moving. If it is fixed (the case Wayne considers), then you may as well keep a running average of all the observations and hope for the best, because that is all a full-blown state space model with the state estimated by a KF would do for you anyway. If it is moving, then you need to ask yourself how it moves. Is it a random walk? Is it steadily increasing? Does it have cyclical or other recurring structure? You need those assumptions to define the state space model for which the KF supplies state estimates.
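The contrast between the fixed-state and moving-state cases can be sketched with a scalar local-level model. This is a minimal illustration, not production code; the function name and the numbers are made up:

```python
# Scalar Kalman filter for a local-level model:
#   state:        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
#   measurement:  z_t = x_t + v_t,      v_t ~ N(0, r)
# Setting q = 0 encodes the "fixed state" case Wayne considers; with a
# diffuse prior the filter then reproduces the running average.
def kalman_local_level(zs, q, r, x0=0.0, p0=1e9):
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                # predict: random-walk variance grows by q
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the new measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

zs = [10.0, 12.0, 11.0, 13.0]
static = kalman_local_level(zs, q=0.0, r=1.0)
# with q = 0 each estimate is (up to the diffuse prior) the running mean
```

With `q > 0` the filter instead discounts old data, which is exactly the moving-state behaviour you would need to justify with an assumption about how the state moves.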
Off by 2
When you say 'let us say that measurements can be off by 2 points' you may think you are making things easier, either to explain or implement, but you aren't. If we take the 'off by 2' idea literally, then Kalman filtering cannot do what you want (although I suppose it might possibly approximate it). This is because the KF assumes your observations are conditionally Normally distributed. Your measurement error assumption would instead be Uniform. This will lead to incorrect inferences about the state if you apply the KF directly.
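One pragmatic workaround (an approximation, not a fix for the model mismatch) is to replace the Uniform error with a Gaussian of matching variance; the bound `a = 2` below comes from your 'off by 2' statement:

```python
# Var(Uniform(-a, a)) = a**2 / 3, so a Gaussian with that variance is the
# moment-matched stand-in for an "off by at most a" error model.
a = 2.0
r_approx = a ** 2 / 3.0   # measurement-noise variance to hand to the KF (~1.33)
```

Whether this approximation is acceptable depends on how literally the plus-or-minus 2 bound should be taken.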
Thinking About the Problem Statistically
You ask 'how do I partition the data such that I get only distinct measurements'. That's a good question if all your data are either good measurements or bad ones. With the KF, however, we assume instead that all measurements have some error, about which we have a small theory, the state space model, which contains a sub-model of this error and another sub-model of the evolution of an underlying state that generates the measurements. The KF makes inferences on the basis of this theory. Consequently, in this framework you don't privilege any measurements but rather look to the estimated state for your answers.
Suggestion
If you are unwilling or unable to specify as much detail as a complete state space model requires, it might be better to back off to more 'empirical' (and easier to implement) methods based on smoothing and weighted averages of recent data. Exponential smoothing might be a helpful place to start, e.g. as described fairly clearly here: http://www.duke.edu/~rnau/411avg.htm. This approach has quite close connections to the KF, so you could return to it later if necessary.
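As a sketch, simple exponential smoothing is only a few lines (the function name and the smoothing constant are illustrative):

```python
def exp_smooth(zs, alpha):
    """Simple exponential smoothing: s_t = alpha * z_t + (1 - alpha) * s_{t-1}."""
    s = zs[0]                      # initialise with the first observation
    out = [s]
    for z in zs[1:]:
        s = alpha * z + (1 - alpha) * s
        out.append(s)
    return out

smoothed = exp_smooth([10.0, 12.0, 11.0, 13.0], alpha=0.5)
```

Larger `alpha` tracks recent data more closely; smaller `alpha` smooths more heavily, which is the one tuning decision this method asks of you.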
The simplest algorithm is the median filter. You can find a C++ implementation in
the R package robfilter. That implementation also includes an 'online' version
that only uses past data and implements some algorithmic shortcuts.
Of course, you will still have to set the "width" argument yourself, but that is the counterpart of asking for a simple algorithm (the package also contains more sophisticated smoothing algorithms).
The median filter is essentially a rolling-window median, so it inherits the median's insensitivity to outliers and its non-parametric interpretability.
So, applied to the dataset you posted, the median filter yields the smoothed series produced by the following code:
library("robfilter")

# Load the posted data; column 2 holds the measurements
a1 <- read.table("sodat.txt", header = TRUE)

# Online rolling median of width 10 (uses only past data)
d1 <- med.filter(a1[, 2], width = 10, online = TRUE)
plot(d1)
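For reference, the online variant of the rolling median is easy to sketch without the package. This is a rough stand-in, not the robfilter algorithm itself:

```python
from statistics import median

# Online rolling median: the estimate at time t uses only the last
# `width` observations up to and including t, so no future data is needed.
def online_median_filter(zs, width):
    out = []
    for t in range(len(zs)):
        window = zs[max(0, t - width + 1) : t + 1]
        out.append(median(window))
    return out

filtered = online_median_filter([1, 100, 2, 3], width=3)
# the outlying 100 is suppressed once the window holds a majority of good values
```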
Best Answer
Yes. In fact, this is how the Kalman Filter (KF) is also set up, at least implicitly. The assumptions made when choosing the KF model are that the state dynamics and measurements form a linear dynamical system. The transition matrix $F_t$ in the prediction equation $\hat{x}_{t|t-1} = F_t\hat{x}_{t-1|t-1} + \dots$ (where $\hat{x}$ is the state estimate) is indexed by time, so irregularly spaced observations shouldn't be an issue.
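For example, in a constant-velocity model the transition matrix is a function of the time gap between observations, so uneven spacing only changes $F_t$, not the filter. A minimal prediction-step sketch with made-up numbers:

```python
def predict(x, dt):
    # state x = (position, velocity); F_t = [[1, dt], [0, 1]],
    # so position advances by velocity * dt while velocity is unchanged
    pos, vel = x
    return (pos + vel * dt, vel)

x = (0.0, 2.0)          # start at position 0 with velocity 2
x = predict(x, dt=0.5)  # short gap: position becomes 1.0
x = predict(x, dt=2.0)  # long gap:  position becomes 5.0
```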
For a more mathematically rigorous explanation of the KF, Max Welling has a really good tutorial that I highly recommend.