Warnings of the form:
LaTeX hooks Warning: Generic hook 'file/after/<name>' is deprecated.
(hooks) Use hook 'file/<name>/after' instead.
are due to a recent change in the LaTeX kernel in which we normalised generic hooks to have the variable part in the middle, because we had env/<name>/after and file/after/<name>, which was simply confusing. Now the file, package, class, and include hooks have the same form as other hooks: file/<name>/after.
To avoid complete breakage of thousands of documents (including yours, dear reader), the old hook names will remain available for a while, until packages (like translations) have time to adjust. The warning is just there as a reminder, and it is completely harmless for your document, so there is nothing to worry about (except maybe asking the package author for an update :).
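Migrating is just a matter of swapping the last two components of the hook name. For instance (using a hypothetical package name):

```latex
% old, deprecated spelling (still works for now, but warns):
\AddToHook{file/after/mystyle.sty}{\typeout{mystyle.sty was loaded}}
% new, normalised spelling:
\AddToHook{file/mystyle.sty/after}{\typeout{mystyle.sty was loaded}}
```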
Just for the sake of discoverability by search engines, similar warnings will be:
LaTeX hooks Warning: Generic hook 'package/after/<name>' is deprecated.
(hooks) Use hook 'package/<name>/after' instead.
LaTeX hooks Warning: Generic hook 'class/after/<name>' is deprecated.
(hooks) Use hook 'class/<name>/after' instead.
LaTeX hooks Warning: Generic hook 'include/after/<name>' is deprecated.
(hooks) Use hook 'include/<name>/after' instead.
You can use \AddToHookNext{<hook>}, which adds code to be executed only the next time the <hook> is called. The document below prints 5 pages, and the third has a red circle in the middle:
\documentclass{scrartcl}
\usepackage{tikz}
\usepackage{blindtext}
\begin{document}
\blindtext[10] % blank line below required

\AddToHookNext{shipout/background}{\put(0,0){%
\begin{tikzpicture}[remember picture, overlay, shift={(current page.center)}]
\draw[fill, color=red] (0,0) circle (2cm);
\end{tikzpicture}%
}}
\blindtext[10]
\end{document}
The blank line between the first \blindtext and \AddToHookNext is needed so that TeX processes \blindtext[10] (breaking it into paragraphs and those paragraphs into pages, thus shipping pages out), and then when you call \AddToHookNext you are on the right page. If you don't add the blank line, by the time you call \AddToHookNext TeX is still processing the first page (even though it has two full pages of content stored in memory), so the circle ends up in the wrong place.
Another way, if you want to place the circle on a specific page (rather than "right here"), is to test the page number:
\documentclass{scrartcl}
\usepackage{tikz}
\usepackage{blindtext}
\begin{document}
\blindtext[10]
\AddToHook{shipout/background}{%
\ifnum\value{page}=3 % only on page 3
\put(0,0){%
\begin{tikzpicture}[remember picture, overlay, shift={(current page.center)}]
\draw[fill, color=red] (0,0) circle (2cm);
\end{tikzpicture}%
}%
\fi}
\blindtext[10]
\end{document}
If you want to turn your code in the hook on and off in the middle of the document (page number unknown), then you need the finer-grained control provided by rules. You can, as in the example below, label the circle code as ./mycircle, and add another blank chunk of code labelled ./stop-mycircle. By default, both chunks are executed (the second is blank, so it does no harm). Then, when you want to deactivate the circle background you write

\DeclareHookRule{shipout/background}{./stop-mycircle}{voids}{./mycircle}

so that the presence of ./stop-mycircle in the hook stops ./mycircle from executing. Later you can write

\ClearHookRule{shipout/background}{./mycircle}{./stop-mycircle}

to remove that "voids" rule and allow ./mycircle to execute again.

The big advantage of this method is that you can use these commands to switch the code on and off at will: none of them acts destructively on the code you added to the hook, so you can always revert. In fact, in most cases \RemoveFromHook is better replaced by a voids rule.
Here is the code (whose output contains circles on pages 3, 4 and 5 only):
\documentclass{scrartcl}
\usepackage{tikz}
\AddToHook{shipout/background}[./mycircle]{\put(0,0){%
\begin{tikzpicture}[remember picture, overlay, shift={(current page.center)}]
\draw[fill, color=red] (0,0) circle (2cm);
\end{tikzpicture}}}
\AddToHook{shipout/background}[./stop-mycircle]{}
\newcommand\circleon{%
\ClearHookRule{shipout/background}{./mycircle}{./stop-mycircle}}
\newcommand\circleoff{%
\DeclareHookRule{shipout/background}{./stop-mycircle}{voids}{./mycircle}}
\circleoff % initially off
\begin{document}
page 1 \clearpage
page 2 \clearpage
page 3 \circleon \clearpage % start showing on page 3
page 4 \clearpage
page 5 \clearpage
page 6 \circleoff \clearpage % no longer show on page 6
page 7 \clearpage
\end{document}
\ClearHookNext shouldn't be read as "clear the hook after the next execution", but as "clear the 'next execution' code": it removes code added with \AddToHookNext. To remove code added with \AddToHook you have to use either \RemoveFromHook or add another code label with a rule that voids the one you want to remove.
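A minimal illustration of that reading (my own example, not from the original answers):

```latex
\documentclass{article}
\begin{document}
% schedule some code for the next shipout only ...
\AddToHookNext{shipout/background}{\put(100,-100){DRAFT}}
% ... and then change our mind: discard the scheduled code.
% This does NOT touch code added with \AddToHook.
\ClearHookNext{shipout/background}
page 1\clearpage
page 2
\end{document}
```

Both pages come out without the "DRAFT" label, because the code scheduled with \AddToHookNext was discarded before the next shipout.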
Best Answer
There are roughly two ways to patch a command: via \scantokens, and via expansion+redefinition. There's a (not so) brief explanation of both at the end of this answer. When ltcmdhooks can detect the type of command, so that it knows exactly the <parameter text> of the command, it patches by expansion+redefinition, so it has no restriction on the catcode settings in force when the macro was defined. In the case of \appendix, which takes no arguments, it can be treated as a token list and expanded, then redefined with the added material. For example, here's a simple sketch of how it works:
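A minimal sketch of the idea (my own illustration, not the actual ltcmdhooks code): since \appendix takes no arguments, expanding it once inside a redefinition recovers its body, and new material can be added next to it:

```latex
\documentclass{article}
% the third \expandafter expands \appendix once inside the braces,
% so the new definition is <old body> followed by the added material
% (this works because \appendix takes no arguments)
\expandafter\def\expandafter\appendix\expandafter{%
  \appendix
  \typeout{Code added after the old body of \string\appendix}%
}
\begin{document}
text
\appendix
more text
\end{document}
```

This happens to work with the article class's \appendix because its body contains no ##; if it did, the expansion would hand back single # tokens and the redefinition would fail with an Illegal parameter number error, which is exactly the problem discussed next.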
However, what I did not anticipate when I wrote that code is the case when the original definition of \appendix contains ## (try such a definition in the code above). When \appendix is defined like that, TeX's definition scanner sees #6#6 (two catcode-6 # characters) and replaces that by a single parameter token #6 in the definition of \appendix; so far so good. However, when you expand the command, TeX also returns a single #6, and then when you try to redefine the command the new replacement text contains an illegal parameter (#B, a parameter token followed by a letter rather than a digit), and the definition errors.

I have changed ltcmdhooks to handle this case (there's a brief explanation below), but meanwhile you can use \ActivateGenericHook (or \ProvideHook in LaTeX 2021-06-01) to tell ltcmdhooks that you have already patched the command, so it won't try patching; then you do the patching manually using etoolbox:
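A sketch of that manual patching (the hook name matches the \appendix case; the failure branch text is my own):

```latex
\usepackage{etoolbox}
% tell ltcmdhooks that the hook is already installed, so it will not
% attempt to patch \appendix itself
\ActivateGenericHook{cmd/appendix/before}
% now install the hook manually with etoolbox
\pretocmd{\appendix}{\UseHook{cmd/appendix/before}}
  {}{\errmessage{Patching \string\appendix\space failed}}
```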
Why the above works
The interface for ltcmdhooks in \AddToHook is supposed to work as follows:

If an end user writes \AddToHook{cmd/name/before}{code}, and the hook cmd/name/before doesn't exist yet (which implies that the command \name doesn't have that hook "installed"), then the code tries to patch that hook into the command.

If the end user writes \AddToHook{cmd/name/before}{code}, and the hook cmd/name/before already exists, this (probably) means that the command \name already has that hook, so it just adds the code to the hook and leaves the command be.

This means that a package author may want to fine-tune the position of the cmd/name/before hook (for example, \def\name{<some initialization>\UseHook{cmd/name/before}<definition>}); then we don't want ltcmdhooks patching the command again (it would be wrong to add the same hook twice), so we tell ltcmdhooks that the hook already exists by saying \ActivateGenericHook{cmd/name/before}, and patching is no longer attempted.

This works for your case because you manually add the hook to the command, and then tell ltcmdhooks that patching is no longer needed. See section 3, "Package Author Interface", of the ltcmdhooks documentation.

So, in essence, you, as the package author, are appropriating the \appendix command by adding the hook yourself (exactly where ltcmdhooks would add it), and then telling ltcmdhooks not to patch it, by using \ActivateGenericHook.

If instead of \appendix you were adding hooks to \UniqueCommandFromMyPackage, then you could use \NewHook instead of \ActivateGenericHook (the effect would be identical), because there would be no possibility of a name conflict.

How LaTeX2ε handles this case now
The problem: Turns out that in the described case we're at a dead-end. When you write a definition like \def\foo#1{#1##X}, TeX stores its <replacement text> as a token list containing out_param 1, par_token #, letter X (out_param 1 is #1, to be replaced by the actual argument when the macro is expanded; par_token # is a catcode-6 #; and letter X is a catcode-11 X). Then, when you expand \foo with the argument #1 (par_token #, character 1), TeX replaces out_param 1 and you have the tokens par_token 1 par_token X, which is equivalent to typing #1#X. If you plug that back into a new definition of \foo you'll have \def\foo#1{#1#X}, which is obviously wrong (and thus the Illegal parameter number error). And at this point you have no way to tell what was an actual parameter when the macro was defined, and what was a single parameter token.

Half solution: There is one very simple case that can be easily detected and solved (which coincidentally is the one in your question): a macro without parameters. In this case the macro takes no argument, so any loose ## in its definition cannot possibly be confused with a parameter, so we can treat such macros as token lists (in the expl3 sense), do something akin to \tl_put_right:Nn, and problem solved.

Another relatively simple case is when the macro has no
##
in its definition. In this case we don't have to worry about confusing parameters, so we treat the macro normally (this was the case implemented initially). LaTeX uses a rather simple loop (\__hook_if_has_hash:nTF) to check if a macro has a parameter token in its definition: it looks at every token in the definition and compares it with #.

The other half: When the macro falls into the general case of having both parameters and parameter tokens in its definition (like \foo above), then we have to manually re-double every parameter token in the definition, so that it can be re-made. To do that, instead of expanding \foo with #1, LaTeX expands it with a marker token, \c_@@_hash_tl, so \foo{\c_@@_hash_tl} expands to the replacement text with the marker standing where the parameter was; then we loop through that replacement text (inside the braces), double every parameter token #, and replace every \c_@@_hash_tl by a single #, and then we can do the definition normally (phew!)
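With a kernel recent enough to include this fix, even the mixed case patches cleanly; for example (my own test document, with a hypothetical macro name):

```latex
\documentclass{article}
% \foo has both a parameter AND parameter tokens (##) in its body
\def\foo#1{(#1: \def\fooaux##1{[##1]})}
% ltcmdhooks now patches this without an Illegal parameter error
\AddToHook{cmd/foo/before}{\typeout{hook runs before \string\foo}}
\begin{document}
\foo{arg}\fooaux{aux}
\end{document}
```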
Patching with \scantokens (wordier description here)
Suppose a macro \mycmd, defined in the usual way with some <prefixes>, a <parameter text>, and a <replacement text>. To append some code to it via \scantokens, you first do \meaning\mycmd to get a string of the form <prefixes>macro:<parameter text>-><replacement text> (with usual \detokenize catcodes: all catcode 12 except spaces, which are catcode 10), then you use a delimited macro to separate the <prefixes>, the <parameter text>, and the <replacement text>. (I'm using \def\prefixes{#1}, etc., for the sake of understandability, but in reality you would inject everything expandably instead; see the definition of \__kernel_prefix_arg_replacement:wN in expl3-code.tex, and \etb@patchcmd in etoolbox.sty, if you're feeling brave.)

At this point you have every part of the definition as a separate string. Now you can either append or prepend some code to \replacement (or replace some part of it, as is done in \patchcmd), or in rarer cases change \prefixes or \parameter. You then have three strings, each of which is a part of the definition; to reconstruct the definition you need to put them back together as <prefixes>\def\mycmd<parameter text>{<replacement text>}, but the three parts you have are still catcode-12 tokens, which are no good. Here comes the \scantokens part: you rescan those strings back into "normal" tokens: after \expanded does its job of assembling the detokenized parts, \scantokens does its thing and turns everything into tokens using the current catcode settings, and then the definition is carried out normally.

The advantage of this method is that you can do virtually any manipulation of any part of the definition.
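Here is a self-contained sketch of the whole round trip (the macro \mycmd and the helper names are stand-ins of my own; real implementations are considerably more careful):

```latex
\documentclass{article}
\makeatletter
\long\def\mycmd[#1]#2{[#1] and (#2)}% stand-in macro to be patched
% \meaning\mycmd yields the string "\long macro:[#1]#2->[#1] and (#2)"
% (all catcode 12, spaces catcode 10); split it at ":" and "->"
\def\split@meaning#1:#2->#3\stop@split{%
  \def\prefixes{#1}\def\parameter{#2}\def\replacement{#3}}
\expandafter\split@meaning\meaning\mycmd\stop@split
% rebuild the definition with " (patched)" appended to the replacement
% text; \scantokens rescans the catcode-12 strings as normal tokens
\begingroup
\endlinechar=-1 %
\scantokens\expandafter{\expanded{%
  \long\gdef\noexpand\mycmd\parameter{\replacement\space(patched)}}}%
\endgroup
\makeatother
\begin{document}
\mycmd[opt]{mand}% typesets "[opt] and (mand) (patched)"
\end{document}
```

The \expanded step assembles the strings into a single line, and \scantokens then rescans it with the current catcode settings, so the # characters become parameter tokens again and the \gdef is carried out normally.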
The disadvantages are a few:

- you need the catcode settings in force to be the same as when the macro was defined (so that the \meaning–\scantokens round trip doesn't change the meaning of the macro), otherwise you can't patch safely;
- if tricks with \edef and \detokenize were used to forcibly make some catcode-12 tokens when the macro was defined, you will probably not be able to patch that macro (for example, \splitaux as defined above in this answer cannot ever be patched with \patchcmd because it contains letters (for example m) of both catcodes 11 and 12);
- if the <parameter text> of the macro contains the characters ->, you won't be able to patch the macro.

Patching with expansion+redefinition
This method is much simpler, but requires previous knowledge of how the macro was defined. This can be done in a few cases, namely when you know exactly what the <parameter text> of the macro is. The cases known by the kernel are when the macro was defined with \DeclareRobustCommand, or with ltcmd (\NewDocumentCommand or \NewExpandableDocumentCommand), or with \newcommand with an optional argument, or when the macro takes no arguments.

Suppose the same macro from before, but defined with \newcommand and an optional argument (it will have an internal macro called \\mycmd, but for the sake of simplicity let's call it \mycmd as well); then we know for sure its <parameter text> is [#1]#2. Knowing what arguments the macro expects, we can feed it #1, #2, ... as arguments, which then expands to the <replacement text> of the macro, with the first parameter replaced by #1 (the parameter token # of catcode 6, followed by the character 1 of catcode 12). After the \expanded step of the patching scheme is done, you are left with exactly what you had with the \scantokens approach, except that you never turned tokens into a string, so catcodes don't matter at all here.

The advantages of this method are roughly the disadvantages of the
\scantokens method:

- you can patch any macro (even the \splitaux macro from before), given you know exactly what its <parameter text> is;
- the <parameter text> of the macro may contain any token your heart desires (as long as you know what token it is); and
- the catcode settings currently in force don't matter at all.

The disadvantage is the requirement for the method to work: you need to know exactly what the <parameter text> is.
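A sketch of this method for the [#1]#2 case (the macro and the helper names are my own stand-ins; the kernel's implementation differs in the details):

```latex
\documentclass{article}
\makeatletter
\def\mycmd[#1]#2{[#1] and (#2)}% stand-in; <parameter text> is [#1]#2
% helpers that expand to the literal parameter tokens #1 and #2
\def\hash@i{##1}\def\hash@ii{##2}
% feed \mycmd its own parameters as arguments: inside \edef it then
% expands to its <replacement text> verbatim, and we redefine it with
% extra code appended (this fails if the body contains loose ##!)
\edef\patch@temp{%
  \def\noexpand\mycmd[\hash@i]\hash@ii{%
    \mycmd[\hash@i]{\hash@ii}\noexpand\typeout{mycmd was patched}}}
\patch@temp
\makeatother
\begin{document}
\mycmd[opt]{mand}% typesets "[opt] and (mand)" and logs the message
\end{document}
```

No string conversion happens at any point, so the catcodes of the tokens in the body are preserved exactly as they were when the macro was defined.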