MATLAB: Is textscan ever better than readtable?

Tags: data import, MATLAB, readtable, textscan

I have some legacy code that reads in data from a text file. The author seems to avoid using readtable() in favour of textscan(), which returns a cell array of strings that the code then converts to the correct types afterwards (both patterns are sketched below). This seems like an awkward way of doing things, and it takes a long time for big files, so my questions are:
  • Is there any obvious reason to do this? Is textscan somehow more flexible/robust than readtable?
  • Is readtable optimised for reading data in a specified format? (i.e. faster than reading a string and converting)
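
For concreteness, here is roughly what the two patterns look like. The file name, delimiter, and column layout below are made up for illustration:

    % Legacy pattern: read every column as strings, convert afterwards
    % (assumes a hypothetical comma-delimited file 'data.txt' with one
    % header line and three columns: name, date, value)
    fid = fopen('data.txt');
    raw = textscan(fid, '%s%s%s', 'Delimiter', ',', 'HeaderLines', 1);
    fclose(fid);
    names  = raw{1};
    dates  = datetime(raw{2}, 'InputFormat', 'yyyy-MM-dd');
    values = str2double(raw{3});

    % Equivalent single call, letting readtable detect the types itself
    T = readtable('data.txt');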

Best Answer

readtable internally calls textscan, but it does a lot of work beforehand to automatically detect the format of the file, and afterwards to split the data into variables of the correct type. So a properly designed call to textscan, followed by direct conversion to a table, is always going to be faster than going through readtable.
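
As a rough sketch of that "properly designed" textscan route (the format string and variable names here are assumptions, not from the original code): because the format string declares the column types up front, textscan produces typed columns directly and no autodetection pass is needed.

    % Hand-specified formats: textscan does the typed conversion itself,
    % and the result goes straight into a table
    fid = fopen('data.txt');
    c = textscan(fid, '%s%f%f', 'Delimiter', ',', 'HeaderLines', 1);
    fclose(fid);
    T = table(c{1}, c{2}, c{3}, 'VariableNames', {'Name', 'X', 'Y'});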
What little you may lose in speed (file I/O probably dominates anyway, so processing speed may not be critical), you make up for with the flexibility of readtable. readtable is simply textscan on steroids (and it gets better with each release), so unless it is demonstrably slower I would always use it.
Note that early versions of readtable weren't as good at autodetection. As said, it has gradually improved with each release since its introduction in R2013b. The introduction of import options in R2016b made it really powerful.
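
If you are on R2016b or later, the usual pattern is to detect the options once, tweak anything the detection got wrong, and reuse them; the file and variable names here are again hypothetical:

    % Detect once, override as needed, then reuse for every similar file;
    % passing opts means readtable skips re-detecting the format each time
    opts = detectImportOptions('data.txt');
    opts = setvartype(opts, 'Name', 'char');  % override a detected type
    T = readtable('data.txt', opts);

This also addresses the speed concern for big batches of similar files: the detection cost is paid once up front instead of on every readtable call.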