I don't know that it'll help your run time unless you can limit the range of lags over which you run it, but the basic way to overlay the two is...
c=xcorr(data(:,1),data(:,2),'coeff');    % normalized cross-correlation of the two channels
x3=[ceil(length(c)/2)+1:length(c)]';     % indices of the positive lags (zero lag is at the midpoint)
[~,ic]=max(c(x3));                       % lag at the correlation peak
y=circshift(data(:,2),ic);               % shift channel 2 by that lag to align with channel 1
plot([1:length(data)]',[data y])
legend('1','2','shift')
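If you have some a priori bound on how far apart the events can be, you can pass that bound as the maxlag argument to xcorr and cut the work directly. A hedged sketch of that idea -- the value of maxlag here is an assumption for illustration, not derived from your data:

```matlab
maxlag=500;                                  % assumed upper bound on the offset (samples)
c=xcorr(data(:,1),data(:,2),maxlag,'coeff'); % only 2*maxlag+1 lags computed
x3=[maxlag+2:length(c)]';                    % positive-lag portion (zero lag at maxlag+1)
[~,ic]=max(c(x3));                           % best positive lag
y=circshift(data(:,2),ic);                   % align channel 2 as before
```

Same idea as above, just with the lag range bounded up front instead of searching all 2N-1 lags.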
ADDENDUM 1: Certainly, if the data as given are representative, one way to speed it up would be to eliminate the samples where both channels are zero, or even at/below the threshold level that you've set. Whether you need to keep the pre- and post-trigger lengths around to restore the overall sample length at the end depends on the application, of course.
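A hedged sketch of that trimming idea -- the guard-band length pad and the threshold value are my assumptions for illustration:

```matlab
th=30.5;                                 % assumed trigger level
keep=any(data>th,2);                     % rows where at least one channel is active
pad=50;                                  % assumed guard samples to preserve the edges
i1=max(find(keep,1,'first')-pad,1);      % start of trimmed record
i2=min(find(keep,1,'last')+pad,size(data,1));  % end of trimmed record
dtrim=data(i1:i2,:);                     % much shorter record for xcorr
c=xcorr(dtrim(:,1),dtrim(:,2),'coeff');  % correlate only the active region
```

The shift found on the trimmed record is the same as on the full one, since trimming removes the same rows from both channels.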
ALTERNATIVE: Again on the presumption the data are representative; this avoids the cross-correlation bottleneck entirely. It does depend on the rise/fall times being as clean as they are here, and on being able to find a suitable threshold for each pair expeditiously...
The threshold must sit above the early noise, in the fast rise/fall region, yet must not intersect the lower middle level of the lower-magnitude signal. It must also be a value that doesn't occur in the dataset, to make the subsequent sign test robust. Since the data appear to be integer-valued, a fractional threshold assures that.
th=30.5;                          % fractional threshold -- can never equal an integer sample
d=diff(sign(data-th));            % +2 marks a rising crossing, -2 a falling one
id=[find(d(:,1)==2) find(d(:,2)==2);find(d(:,1)==-2) find(d(:,2)==-2)]   % [ch1 ch2] crossing indices
id =
610 437
715 542
shft=diff(id,[],2)                % channel 2 crossing minus channel 1, per edge
shft =
-173
-173
shft=floor(mean(diff(id,[],2)));  % average the rise and fall estimates
yd=circshift(data(:,2),abs(shft));  % shift channel 2 into alignment
Quick, but more sensitive to noise by far...salt to suit! :)
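One guard against that noise sensitivity, a variation of my own and not part of the original: with noisy data each channel may cross the threshold more than once, so pick the first rising and last falling crossing per channel rather than relying on find() returning exactly one index each:

```matlab
th=30.5;                                          % fractional threshold as before
d=diff(sign(data-th));                            % +2 rising, -2 falling crossings
ir=[find(d(:,1)==2,1,'first') find(d(:,2)==2,1,'first')];    % first rise per channel
ifl=[find(d(:,1)==-2,1,'last') find(d(:,2)==-2,1,'last')];   % last fall per channel
shft=floor(mean([diff(ir) diff(ifl)]));           % average the two offset estimates
yd=circshift(data(:,2),abs(shft));                % align channel 2
```

Still quick, and it won't blow up on a spurious extra crossing in the middle of an edge.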