\makeatletter
\def\@ifismacro#1{%
\begingroup\escapechar=-1
\edef\x{\endgroup\def\noexpand\first{\string#1}}\x
\begingroup\escapechar=`\\
\edef\x{\endgroup\def\noexpand\second{\string#1}}\x
\ifnum\pdfstrcmp{\first}{\second}=\z@
\expandafter\@secondoftwo % no backslash in front
\else
\expandafter\@firstoftwo % backslash in front
\fi}
\def\report#1{\@ifismacro{#1}{\message{CS}}{\message{NON CS}}}
\makeatother
\report{A}
\report{\"}
\let\pippo=a
\report{\pippo}
The problem with this approach is that it is not completely expandable, as it relies on assignments to \escapechar, even though it is independent of the value that parameter has at the moment the test is performed. This test distinguishes the last case, which is not possible with \ifcat, nor, it seems, with \ifcsmacro of etoolbox.
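Concretely, the failure mode looks like this (a sketch reusing the definitions above):

```latex
% Inside \edef only expandable tokens are acted on.
% \begingroup, the \escapechar assignment, and the inner
% \edef are unexpandable, so they are copied into the
% replacement text unexecuted; \first and \second are never
% set up, and the \ifnum comparison sees stale (or
% undefined) values.
\edef\bad{\@ifismacro{\pippo}{\message{CS}}{\message{NON CS}}}% unreliable
```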
I am pleased to be able to teach Martin Scharrer something he didn't know :)
A fully expandable sanitizer
The following is an implementation of a \Sanitize command that:
Completely removes all control sequences and balanced braces in its argument.
Does not choke on nested braces.
Keeps spaces where they were requested, either by " " or "\ " (the latter for use after macros).
Is fully expandable (i.e. it can be put in \edef or \csname).
Edit: This is a revised version. My initial code had a few minor bugs that were a major pain to fix, and this is substantially rewritten. I think it's clearer, too.
How it works
There are three states: sanitizing spaces, sanitizing groups, and sanitizing tokens. We scan for "words" one at a time, then within each "word" look for groups that might be hiding spaces (TeX's macro scanner will only absorb delimited arguments with balanced braces). Finally, once we are satisfied that we are looking at genuinely contiguous tokens, we scan one at a time and throw out the ones that are control sequences, leaving only explicitly specified spaces (" " or "\ ").
From the inside out, the operation looks like this:
\SanitizeTokens is a big nested conditional that tests its argument against the various special cases. During the sweep for spaces, all space characters were converted to \SanitizedSpace tokens, and they are now converted to \RealSpace tokens. Both \SanitizedSpace and \SanitizeStop are macros that expand to themselves, and since they are private, testing against them via \ifx is a reliable way to detect these exact control sequences (in the first version, these were \countdef tokens, which have the same property but are not quite as private).
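The sentinel trick can be seen in isolation (a sketch; \IsStop is an invented name):

```latex
% A private sentinel defined to expand to itself.  The
% sanitizer never actually expands it: it is only used as an
% argument delimiter and compared against with \ifx.
\def\SanitizeStop{\SanitizeStop}
% \ifx compares meanings without expansion, so this fires
% exactly for tokens whose meaning is that of the sentinel:
\def\IsStop#1{\ifx\SanitizeStop#1 yes\else no\fi}
% \IsStop\SanitizeStop -> "yes"     \IsStop A -> "no"
```

Note that \ifx-equality is about meaning: another macro defined as \def\x{\SanitizeStop} would also pass the test, which is why keeping the name private matters.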
\SanitizeGroups uses the tricky \def\SanitizeGroups#1#{ construction discussed in this question: Macros with # as the last parameter. It is the most legitimate such use I can imagine: its point is to detect groups, which you can't do using plain macro expansion in any other way. It guarantees that #1 has no groups in it, and since this comes after space elimination, it also has no spaces in it, so we can run \SanitizeTokens straight away. We then "enter" the group and go back to eliminating spaces.
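A minimal sketch of the \def...#1#{ device outside the sanitizer (\UpToGroup is a made-up name):

```latex
% #1 is delimited by an opening brace: it absorbs everything
% up to (but not including) the next explicit {.  That { also
% opens the replacement text, and it is left in the input
% after expansion, so the group is still there to process.
\def\UpToGroup#1#{\message{before the group: [#1]}}
\UpToGroup abc{def}
% writes "before the group: [abc]" to the log and leaves
% {def} in the input, where it forms an ordinary group.
```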
\SanitizeSpaces uses pattern matching to grab the first chunk of text up to a space, excluding of course those spaces that are inside groups. There is a technical trick here: every use of this macro has {} right after it, before the text. Its point is to keep the argument scanner from removing the braces around a group that constitutes an entire "word" between spaces. If that happened, we would erroneously treat the group as though it had been cleared of spaces when, in fact, it has not. (Any unsanitized spaces would then be eaten by \SanitizeTokens, because argument scanning ignores spaces.)
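The brace-stripping behavior that the leading {} guards against can be demonstrated with a toy macro (names invented for this sketch):

```latex
\def\Stop{}% just an argument delimiter for this sketch
\def\FirstWord#1 #2\Stop{\message{got: [#1]}}
% Without a leading {}, a word that is a single group loses
% its braces during argument scanning, exposing its space:
\FirstWord {a b} c\Stop    % got: [a b]
% With the {} prefix the braces survive intact:
\FirstWord{}{a b} c\Stop   % got: [{}{a b}]
```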
There are of course some cute utility macros. My favorite is \IfNoGapToStop, which is called like this: \IfNoGapToStop.X. \SanitizeStop, with X being the text potentially containing a gap. If it has none, then the first gap is the visible space after the second period; if it does have a gap, then the two periods land in different arguments, and both arguments of \IfNoGapToStop are nonempty.
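With the definitions from the listing below, this can be checked directly; everything involved is expandable, so it even works inside \edef (a sketch):

```latex
% No space in "AB": the delimiting space is the one after the
% second period, the second delimited argument is empty, and
% the first branch is selected.
\edef\x{\IfNoGapToStop.AB. \SanitizeStop{no gap}{gap}}% \x -> "no gap"
% A space in "A B": the two periods land in different
% arguments, both nonempty, and the second branch is selected.
\edef\y{\IfNoGapToStop.A B. \SanitizeStop{no gap}{gap}}% \y -> "gap"
```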
Aside from the structural changes from the previous version, this one correctly preserves spaces at the boundaries of groups. (That version didn't explicitly scan for groups, but eliminated them as a side effect of absorbing tokens. That works, but it also makes it impossible to be sure when you are looking at a group, which may have spaces, rather than a single token.)
Oh, and of course: the algorithm is no longer stupid. The last version rescanned the entire initial portion of the text repeatedly while looking for words (the point of that was so as not to "lose" those tokens before sanitizing them). Now I crawl through the words one at a time, so there's no problem with abandoning each one when looking for the next. That turns a quadratic algorithm into a linear one.
This is not my preferred way of writing TeX anymore (for that, you should read this answer: How to write readable commands), but pgfkeys is really not the tool for this kind of textual parsing.
\documentclass{article}
\makeatletter
\newcommand\Sanitize[1]{%
\SanitizeSpaces{}#1 \SanitizeStop
}
% This loops through and replaces all spaces (outside brace groups) with \SanitizedSpace's.
% Then it goes for the control sequences.
% All calls to this should put a {} right before the content, to inhibit the gobbling of braces
% if there is a group right at the beginning.
\def\SanitizeSpaces#1 #2\SanitizeStop{%
\IfEmpty{#2}% Last word
{\IfEmpty{#1}% No content at all
{}% Nothing to do
{\SanitizeGroups#1{\SanitizeStop}}%
}%
% No need for a trailing space anymore: there's already one from the initial call
{\SanitizeGroups#1\SanitizedSpace{\SanitizeStop}\SanitizeSpaces{}#2\SanitizeStop}%
}
% Sanitize tokens up to the next group, then go back to doing spaces.
\def\SanitizeGroups#1#{%
\SanitizeTokens#1\SanitizeStop
\EnterGroup
}
% Sanitize the next group from the top.
\newcommand\EnterGroup[1]{%
\ifx\SanitizeStop#1%
\expandafter\@gobble
\else
\expandafter\@firstofone
\fi
{\SanitizeSpaces{}#1 \SanitizeStop\SanitizeGroups}%
}
\newcommand\SanitizeTokens[1]{%
\ifx\SanitizeStop#1%
\else
\ifx\SanitizedSpace#1%
\RealSpace
\else
\ifx\ #1%
\RealSpace
\else
\if\relax\noexpand#1%
\else
#1%
\fi
\fi
\fi
\expandafter\SanitizeTokens
\fi
}
% We use TeX's proclivity to eat braces even for delimited arguments to eat the braces if #1
% happens to be just {}, which we put in.
% Even if we didn't put it in, {} is going to get thrown out when \SanitizeSpaces gets to it.
\newcommand\IfEmpty[1]{%
\IfOneTokenToStop.#1\SanitizeStop
{% #1 has at most space tokens
% and thus is nonempty if and only if there is a gap:
\IfNoGapToStop.#1. \SanitizeStop
}
{% #1 has non-space tokens
\@secondoftwo
}%
}
% Checks for a gap in #1, meaning #2 is nonempty
% This should only be used with \IfEmpty
\def\IfNoGapToStop#1 #2\SanitizeStop{%
% It's enough to check for one token, since #2 is never just spaces
\IfOneTokenToStop.#2\SanitizeStop
}
\def\IfOneTokenToStop#1#2{% From \IfEmpty, #1 is always a .
\ifx\SanitizeStop#2%
% If #2 is multi-token, the rest of it will fall in the one-token case and be passed over.
% If not, well, that's what we asked for.
\expandafter\@firstoftwo
\else
\expandafter\GobbleToStopAndSecond
\fi
}
\def\GobbleToStopAndSecond#1\SanitizeStop{%
\@secondoftwo
}
\makeatother
\def\SanitizeStop{\SanitizeStop}
\def\SanitizedSpace{\SanitizedSpace}
\def\RealSpace{ }
\begin{document}
\setlength\parindent{0pt}\tt
% Torture test
\edef\a{%
\Sanitize{ Word1 \macro{Word2 Word3}{\macro\ Word4}{ Word5} {Word6 }{}Word7{ }{{Word8}} }
}\meaning\a
\a
\medskip
% Examples
\edef\a{%
\Sanitize{\emph{This} sentence has \TeX\ macros and {grouping}. }
}\meaning\a
\a
\medskip
\edef\a{%
\Sanitize{{A}{ gratuitously {nested} sentence {}{{with many} layers}}.}
}\meaning\a
\a
\medskip
\end{document}
Best Answer
"Why?" questions cannot really be answered except by the person who originally designed the system. But in most languages (certainly most languages of the era) the grammar for names is defined by explicitly listing the allowed characters rather than by listing terminating characters. In C or Fortran or most other programming languages
abc+xyz*rst
would be three variable tokens separated by the operator tokens + and *, so TeX's behavior is hardly uncommon. Unlike those languages, though, almost none of the lexical rules in TeX are fixed, so if you want to allow + and * in multi-letter command names you just need to make them letters, and you can then define
\foo*+ as a command; however, \alpha+\beta would no longer work, and you would have to write \alpha +\beta instead.
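For reference, making + and * into letters looks like this (a sketch; the replacement text of \foo*+ is arbitrary):

```latex
% Give * and + category code 11 ("letter") so they may
% appear inside multi-letter control-sequence names:
\catcode`\*=11
\catcode`\+=11
\def\foo*+{did it}
% Now \foo*+ is a single command.  The price: after these
% assignments, \alpha+\beta is tokenized as the (undefined)
% control word \alpha+ followed by \beta, because the + is
% swallowed into the name; \alpha +\beta still works.
```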
It isn't really accurate to say that: the * isn't (in general) an argument to \mycs; it is simply the next token in the input stream. Consider \alpha*\beta, where the * is simply typeset as an infix operator between the two tokens.