While a definitive answer can only come from the Stanford team involved in the development of TeX, and from Professor Knuth in particular, I think we can see some possible reasons.
First, Knuth designed TeX primarily to solve a particular problem (typesetting The Art of Computer Programming). He made TeX sufficiently powerful to solve the typesetting problems he faced, plus the more general case he decided to address. However, he also kept TeX (almost) as simple as necessary to achieve this. While expandable macros are useful, they are not required to solve many issues.
Secondly, there are cases where an expandable approach would be at least potentially ambiguous. Bruno's \edef\foo{\def\foo{abc}} is a good case: I'd say that here the expected result with an expandable \def is that \foo expands to nothing, but I'd also say this is not totally clear. There is also the much more common case where you want something like
\begingroup
\edef\x{%
\endgroup
\def\noexpand\foo{\csname some-macro-to-fully-expand\endcsname}%
}
\x
which would be made more complex with expandable primitives.
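A minimal sketch of that idiom in action (the macro name \somemacro is made up for illustration):

\def\somemacro{fixed text}
\begingroup
\edef\x{%
  \endgroup
  \def\noexpand\foo{\somemacro}%
}
\x
% Executing \x first closes the group (so the temporary definition of
% \x itself is discarded), then performs the pre-expanded \def, leaving
% \foo with the replacement text "fixed text".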
The above example points to another grey area: what would happen with things like \begingroup and, more importantly, \relax. The fact that the latter is a non-expandable no-op is often important in TeX programming. (Indeed, the fact that \numexpr, etc., gobble an optional trailing \relax is sometimes regarded as a bad thing.)
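To illustrate the \relax point with a small sketch:

\count0=\numexpr 1+2\relax
% The trailing \relax terminates the expression and is gobbled by
% \numexpr, so nothing is left behind; \count0 is 3.
\def\marker{\relax}
% By contrast, \relax used on its own is a harmless, non-expandable
% no-op, which makes it useful as a stopping marker in macro code.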
Finally, I suspect that ease of implementation was important. Having separate expansion and execution steps makes the flow relatively easy to understand and, I suspect, to implement; an approach which mixes expansion and execution requires a more complex architecture. Here, we have to remember when Knuth was writing TeX: programming ideas which we take for granted today were not necessarily applicable in the late 1970s. A fully expandable approach would, I suspect, have made the code more complex and slower, and the speed impact was important at a time when TeX was running on 'big' shared computers.
When TeX absorbs the replacement text of a macro for \def, it performs no expansion whatsoever. On the contrary, when it performs \edef, it expands every expandable token recursively, with some exceptions. One exception is that tokens resulting from the expansion of \the<token register> are not expanded further. The same holds for tokens resulting from \unexpanded, which is very similar to using an unnamed token register. The expansion of \noexpand is empty, but it makes the next token temporarily equivalent to \relax, so it is not expanded further.
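A minimal sketch of these exceptions (the macro names \aaa and \bbb are made up for illustration):

\def\aaa{A}
\def\bbb{B}
\toks0={\bbb}
\edef\test{\aaa\noexpand\bbb\unexpanded{\aaa}\the\toks0 }
% In the replacement text of \test: \aaa expands fully to A;
% \noexpand\bbb is left as the single token \bbb; the \aaa inside
% \unexpanded{...} and the \bbb delivered by \the\toks0 are not
% expanded further. The result is the token list A\bbb\aaa\bbb.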
This seems to exclude the possibility of implementing a \doexpand marker. However, you can use regular expressions: change every control sequence <cs> into \noexpand<cs>, and then change \noexpand\doexpand\noexpand into nothing.
There are several limitations in the following implementation of \pedef (a partial \edef): \doexpand must precede a control sequence, active characters are not covered, and parameters to the macro are not allowed, so it is just a proof of concept.
\documentclass{article}
\usepackage{xparse,l3regex}
\newcounter{mycount}
\setcounter{mycount}{42}
\ExplSyntaxOn
\cs_new_protected:Npn \pedef #1 #2
  {
    \tl_set:Nn \l_tmpa_tl { #2 }
    % prefix every control sequence with \noexpand
    \regex_replace_all:nnN { (\cC.) } { \c{noexpand}\1 } \l_tmpa_tl
    % where \doexpand marked a control sequence, remove both \noexpand
    % tokens so that control sequence is expanded in the x-type definition
    \regex_replace_all:nnN { \c{noexpand}\c{doexpand}\c{noexpand} } { } \l_tmpa_tl
    \cs_set:Npx #1 { \l_tmpa_tl }
  }
\ExplSyntaxOff
\pedef\foo{\textit{\doexpand\arabic{mycount}}}
\show\foo
This outputs
> \foo=\long macro:
->\textit {42}