MATLAB: How to parse poorly formatted txt file

Tags: complex txt file, parsing, textscan

Hello,
I'm working with modeling software that outputs data in badly formatted .txt files, there is a screenshot of the output below. Each output file contains 10,000 data blocks which begin with the highlighted "1tally" string, and end several lines later with a decimal number.
Ideally I need to be able to pull the 1tally string, the number following it (14 and 24 in the picture), and the remaining two values in each data block (6.47566E-07 0.0187 and 6.93514E-07 0.0181). I've tried using textscan options to locate the 1tally string, but I'm not familiar enough with MATLAB to write the loop that keeps seeking the remaining 9,999 data blocks. I can't use the 'HeaderLines' option because the entries are not on the same row in every file, and even within a single file the number of rows between data blocks varies anywhere from 1 to 500.
Any help or advice would be greatly appreciated.
Edit: I can't post the full output file, but I've attached a shortened version. The formatting is the same as what I need the code for; the only difference would be the number of rows between the start of the file and the first occurrence of 1tally.

Best Answer

I would probably use fileread() to read the entire file into a string, and then I would probably use regexp() with named tokens. It might not be bad... something like
regexp(S, '(?<=1tally\s+)(?<tallyno>\d+)(?:.*?)(?<last2>\S+)(?:\s+)(?<last1>\S+)(?=\s+=)', 'names')
This looks for 1tally followed by whitespace, then puts the decimal digits that follow into the field 'tallyno'. Then it skips as few characters as possible to satisfy what comes after: it captures a run of non-whitespace characters into a field named 'last2', skips whitespace, and captures another run of non-whitespace characters into a field named 'last1'. After that it skips whitespace, and then it is mandatory that there is an "=".
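To show how this might be used end to end, here is a minimal sketch. The sample string below is invented to mimic the described layout (including a trailing "=" after each pair, which the lookahead requires); the real files will differ, so treat this as an illustration rather than a tested solution:

```matlab
% Invented sample text mimicking the described block layout.
% In practice you would read the whole file instead:
%   S = fileread('output.txt');
S = sprintf(['header junk\n1tally       14\n', ...
             'intervening lines of varying count\n', ...
             '   6.47566E-07 0.0187     =\n', ...
             'more text\n1tally       24\n', ...
             '   6.93514E-07 0.0181     =\n']);

% Named-token regexp; MATLAB's dot matches newlines by default,
% so (?:.*?) can lazily skip across the lines between blocks.
tok = regexp(S, ['(?<=1tally\s+)(?<tallyno>\d+)(?:.*?)', ...
                 '(?<last2>\S+)(?:\s+)(?<last1>\S+)(?=\s+=)'], 'names');

% tok is a struct array, one element per data block.
% Convert the captured strings to numeric vectors:
tallyno = str2double({tok.tallyno});   % tally numbers, e.g. 14 and 24
last2   = str2double({tok.last2});     % first value of each pair
last1   = str2double({tok.last1});     % second value of each pair
```

Because regexp with 'names' returns every match in the file, no explicit loop over the 10,000 blocks is needed, and the varying number of rows between blocks does not matter.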
I would need an extract of the file to test the expression to be certain.