I have a number of Excel files, each with many channels of data plus an "Absolute Time" column. I can read the data in without issue, but I have to process the "raw" cell array from [~,~,raw] = xlsread(…) because the time is in the unusual DDD:HH:MM:SS.sssssssss format (where DDD is the three-digit day of the year). As far as I can tell, datenum and similar functions can't handle this format, so I'm processing it manually: break the string into pieces at the colon character, convert the pieces to numbers, and multiply them out. However, according to the profiler, processing this one column accounts for about half of the total time for processing my files, so I'd like to figure out a more efficient way of doing it.
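For a single timestamp, the manual conversion I'm describing works like this (a minimal illustration using one of the sample values from the excerpt below):

```matlab
% Manual conversion of one DDD:HH:MM:SS.sss timestamp to seconds
s = strsplit('131:17:31:20.050000000', ':');   % {'131','17','31','20.050000000'}
t = str2double(s);                             % [131 17 31 20.05]
secs = t(1)*86400 + t(2)*3600 + t(3)*60 + t(4); % 11381480.05 seconds since day 0
```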
Here is an excerpt of column A of the spreadsheet:
```
...
AbsoluteTime
DDD:HH:MM:SS.sssssssss
Absolute Time
131:17:31:20.000000000
131:17:31:20.050000000
131:17:31:20.100000000
131:17:31:20.150000000
131:17:31:20.200000000
131:17:31:20.250000000
131:17:31:20.300000000
...
```
There are an arbitrary number of comment lines at the top of the file (hence the initial …). After passing over the comment lines, I set an index called varNameRowIdx to the line containing the "DDD:HH:MM:SS.sssssssss" string. I don't actually care whether I end up with absolute times or times relative to the first time value, but I do need to properly handle the case where the time rolls over to the next day within the data. Right now, I'm grabbing the first time value (raw{(varNameRowIdx+2),1}) in the first sheet (jj of 1) of the first file (ii of 1) and then offsetting all other time values by that t0 to create relative time.
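The day rollover is handled naturally by this representation, because converting each stamp to total seconds since day 0 makes a day boundary just another small step. A quick check (with made-up boundary values):

```matlab
% Hypothetical rollover check: day boundary is just another 0.05 s step
a = str2double(strsplit('131:23:59:59.950000000', ':'));
b = str2double(strsplit('132:00:00:00.000000000', ':'));
w = [86400 3600 60 1];      % weights: day, hour, minute, second
dt = w*b.' - w*a.';         % 0.05 s across the day boundary
```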
```matlab
% define first time value of first sheet of first file as t = 0
if 1==ii && 1==jj
    t0 = str2double(strsplit(raw{(varNameRowIdx+2),1},':'));
    t0 = t0(1)*86400 + t0(2)*3600 + t0(3)*60 + t0(4);
end

% specially process time column
idx = 1;
tvec = zeros(size(raw,1)-(varNameRowIdx+1),1);
for kk = (varNameRowIdx+2):size(raw,1)
    tt = str2double(strsplit(raw{kk,1},':'));
    tt = tt(1)*86400 + tt(2)*3600 + tt(3)*60 + tt(4);
    tvec(idx) = tt - t0;
    idx = idx + 1;
end
```
The tt = str2double(strsplit(raw{kk,1},':')); line is the really expensive one, with the strsplit and str2double calls taking 21% and 23% of the total time, respectively. I'm sure there's a better way to process the timestamp data than one element at a time, but I'm not sure what it is. Any suggestions?
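For comparison, one vectorized direction I've wondered about (a sketch only, not tested against the real files; it assumes every cell in the time column is a well-formed DDD:HH:MM:SS.sss string) is to join all the timestamps into one char array and let a single sscanf call parse them, since sscanf reapplies its format down the whole string:

```matlab
% Vectorized sketch (assumes all cells are well-formed DDD:HH:MM:SS.sss strings)
strs = raw((varNameRowIdx+2):end, 1);          % time column as a cell array
big  = sprintf('%s:', strs{:});                % join with ':' so the format cycles
v    = sscanf(big, '%d:%d:%d:%f:');            % [DDD;HH;MM;SS;DDD;HH;MM;SS;...]
secs = [86400 3600 60 1] * reshape(v, 4, []);  % absolute seconds, 1-by-N
tvec = (secs - secs(1)).';                     % relative time, column vector
```

Because the format string mixes %d and %f, sscanf returns a double array, so no cast is needed before the weighted sum.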
Best Answer