Your suspicion about 0 is simply wrong, although I have no idea what you actually did. As I show below, you never needed to use a tolerance at all, and regardless, I cannot reproduce the behavior you claim to see.
consolidator is a function fundamentally designed to remove duplicate (or near-duplicate) elements of x, reducing the corresponding values in y with some aggregation function.
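Conceptually, the operation looks something like this. (This is a hypothetical sketch of the idea only, NOT consolidator's actual implementation; `consolidate_sketch` is a made-up name, and the real tolerance semantics may differ.)

```matlab
% Hypothetical sketch of the idea behind consolidator, not its real code:
% sort x, start a new group wherever the gap to the previous element
% exceeds tol, then aggregate the y values within each group.
function [xcon, ycon] = consolidate_sketch(x, y, fun, tol)
  [xs, ind] = sort(x(:));
  ys = y(ind);
  grp = cumsum([1; diff(xs) > tol]);       % group labels for clustered x
  xcon = accumarray(grp, xs, [], @mean);   % representative x per group
  ycon = accumarray(grp, ys, [], fun);     % aggregated y per group
end
```

With tol = 0, only exact duplicates in x are merged, which is why no tolerance is needed for true integer data.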
Be careful though. If the numbers in x really were whole numbers, as you say, then consolidator should have been fine with any reasonable tolerance. In fact, you never needed a tolerance at all, since the values are (so you claim) exact integers.
x = randi([0,23],1000,1);
y = rand(1000,1);
[x0,y0] = consolidator(x,y,@nanmean);
x0
x0 =
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
numel(x0)
ans =
24
So it works fine with no tolerance at all. With a tolerance of exactly 1, which also happens to be the exact stride between bins, there is some floating point risk that adjacent values will be considered within the tolerance and merged.
But a quick test can verify that fact or not.
[x0,y0] = consolidator(x,y,@nanmean,1);
numel(x0)
ans =
24
Nope. Works fine here. In fact, I could probably have used a tolerance smaller than the stride of 1 and still survived.
[x0,y0] = consolidator(x,y,@nanmean,.9);
numel(x0)
ans =
24
Really though, no tolerance was ever needed, as I showed. In fact, I simply cannot reproduce the behavior you claim to have seen, unless of course you don't really have what you say you have. If the numbers in x are not actually whole numbers, then problems arise.
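For example, if x carried some floating point fuzz (hypothetical data, purely to illustrate the failure mode), then a tolerance of 1 could easily start chaining neighboring values together:

```matlab
% Hypothetical: the same hours, contaminated with floating point noise
x = randi([0,23],1000,1) + 0.4*rand(1000,1);
y = rand(1000,1);
[x0,y0] = consolidator(x,y,@nanmean,1);
numel(x0)   % likely far fewer than 24 bins, since neighboring values
            % now fall within the tolerance and get merged
```

That is the kind of collapse you would see if your x values only looked like whole numbers.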
Regardless, for the problem you claim to have, you never needed to use any tolerance there at all.