I am the author of the ConTeXt module t-vim, which is similar to minted but uses vim rather than pygmentize to generate syntax highlighting. The t-vim module actually delegates the task of running external programs to the t-filter module, which provides the necessary plumbing to call an external program on the contents of an environment.
By default, the t-filter module behaves in the same manner as the minted package: it writes the contents of the environment to an external file, calls the external program, and inputs the result back into TeX. However, to deal with slow external programs, the filter module provides a continue=yes option. When this option is enabled, the contents of each environment are written to a separate file and the md5 sum of each file is calculated. The external filter is run only if the md5 sum has changed.
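As an illustration, a typical definition using the filter module with this option looks roughly like the following (the filter command shown, pandoc, is just an example; consult the t-filter documentation for the exact interface):

```tex
\usemodule[filter]

% Define an environment whose body is piped through an external
% program; with continue=yes the program is rerun only when the
% md5 sum of the environment body changes.
\defineexternalfilter
  [markdown]
  [filtercommand={pandoc -t context
      -o \externalfilteroutputfile\space \externalfilterinputfile},
   continue=yes]
```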
In MkII, this feature is enabled by calling the external program using
\doifmode{*first}
{\executeexternalcommand{mtxrun --ifchanged=\inputfile \externalprogram}}
This calls mtxrun, the wrapper script for ConTeXt, which calculates the md5 sum of the file (storing it as filename.md5) and runs the program only if the md5 sum has changed. This is faster than running vim unconditionally, but still slow, as a new process (mtxrun) must be spawned. To speed things up, I wrap the entire command in \doifmode{*first} so that mtxrun is called only during the first run of a multi-run compilation.
To speed things up further, in MkIV I use the ConTeXt Lua function job.files.run, which stores the md5 sum in the tuc file (similar to the aux file in LaTeX). So the call to the external program is roughly equivalent to
\ctxlua{job.files.run("\inputfile", "\externalprogram")}
The same method can, in principle, be implemented in minted. In fact, the mtxrun --ifchanged method could be incorporated easily, provided that minted wrote each environment to a separate file (currently it does not do that).
If you know the table widths in advance, you can "seed" the data that longtable writes to the aux file so that it gets the correct widths the first time. That won't speed up each run, but it means it doesn't take several runs for longtable to converge. (Basically, look at the format of the command longtable writes to the aux file, recording the column widths, and put that into the document preamble.)
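A minimal sketch of such seeding, assuming the first longtable in the document (internally numbered LT@i); the widths here are placeholders, and the safest approach is to copy the exact lines from a previously generated .aux file, since the format may differ between longtable versions:

```tex
\makeatletter
% Pre-seed the column widths for the document's first longtable so the
% first run already uses the converged widths.
\expandafter\gdef\csname LT@i\endcsname{%
  \LT@entry{1}{100pt}%
  \LT@entry{1}{180pt}}
\makeatother
```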
It's possible that compilation speed improves a bit if you increase LTchunksize; with modern TeX memory sizes you can probably increase it a lot, so that the whole table is processed in one chunk.
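LTchunksize is an ordinary LaTeX counter (default 20 rows), so raising it is a one-liner in the preamble; the value here is just an example large enough to cover the whole table:

```tex
% Process up to 1000 rows per chunk instead of the default 20.
\setcounter{LTchunksize}{1000}
```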
If you really know all the widths, and don't need any fancy column-spanning behaviour, there is always the option of not using TeX's alignment mechanism at all and just making each row a row of fixed-width hboxes. That saves TeX the bother of storing all the data in unset boxes and working out the column widths.
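A minimal sketch of this idea, with made-up macro names and an arbitrary 8em column width:

```tex
% Each cell is a fixed-width box; each row is just a line of such
% boxes, so TeX never has to measure or align anything.
\newcommand{\fixedcell}[1]{\hbox to 8em{\strut #1\hfil}}
\newcommand{\fixedrow}[3]{%
  \noindent\fixedcell{#1}\fixedcell{#2}\fixedcell{#3}\par}

\fixedrow{alpha}{beta}{gamma}
\fixedrow{one}{two}{three}
```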
Of course, the time taken depends rather on how complicated the cells are. If you got rid of all the table markup and just set each cell as a paragraph, that wouldn't give the layout you want, but it would give a lower bound on the achievable time.
Best Answer
Beware of Greeks bearing gifts... Apparently, the nice todonotes package that I used dragged TikZ along with it and introduced a major slowdown. Switching to simple marginpars gave a nice speedup. It should be noted that this package is a performance Trojan horse.
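For reference, a lightweight stand-in for todonotes' \todo command that avoids loading TikZ might look like this; the formatting choices are illustrative only:

```tex
\usepackage{xcolor}% only for \textcolor; drop if plain notes suffice

% Cheap replacement for todonotes' \todo: a plain margin note.
\newcommand{\todo}[1]{%
  \marginpar{\footnotesize\raggedright\textcolor{red}{#1}}}
```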