Old techniques are not necessarily more fundamental; they are just more primitive. (And often, but not always, more efficient.)
In particular, old compiler texts will often do things like emit code directly from the parser, try to minimise the number of AST passes, or keep complex global symbol tables separate from the AST. These are mostly workarounds for technical limitations that are no longer relevant, and in fact only obscure the principles being taught (code generation during parsing is my pet example of this).
This is pointedly not about the Old Techniques in compiler writing.
For example, the author doesn't divide the compiler into multiple passes just so each pass can fit in RAM; he assumes you have "enough" RAM for a simple compiler to fit entirely in memory, and goes from there.
Similarly, he jumps right into recursive code, because that's simpler, and he assumes your computer has a stack of reasonable depth. (Go back far enough and computers had really shallow stacks. Go back farther and computers didn't have call stacks at all.)
Finally, his compiler doesn't optimize the code, because he assumes that the obvious code will run sufficiently fast. A fairly modern idea, and one which removes the complexity which would otherwise drive the design and force things like multiple passes and complex internal representation.
Isn't code generation during parsing still common today? In particular, bytecode generation in interpreters (and JIT compilers) for scripting languages, e.g. Lua?
It's sometimes a good idea to do it that way in practice, but it's still a conflation of two conceptually distinct processes. I think it is a bad approach when teaching compiler implementation, as it means you avoid the extremely core concept of an abstract syntax tree.
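To make the conflation concrete, here is a minimal sketch (not from any compiler mentioned in the thread; all names are invented) of code generation during parsing: a recursive-descent parser for `+`/`*` expressions that emits stack-machine instructions as it parses, with no AST anywhere.

```python
# Single-pass "compile as you parse": the parser IS the code generator.
# Grammar: expr -> term ('+' term)* ; term -> factor ('*' factor)* ;
# factor -> integer literal.

def compile_expr(tokens):
    code = []
    pos = [0]  # mutable cursor shared by the nested functions

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def next_tok():
        tok = tokens[pos[0]]
        pos[0] += 1
        return tok

    def factor():
        code.append(("PUSH", int(next_tok())))  # emit immediately

    def term():
        factor()
        while peek() == "*":
            next_tok()
            factor()
            code.append(("MUL",))  # codegen interleaved with parsing

    def expr():
        term()
        while peek() == "+":
            next_tok()
            term()
            code.append(("ADD",))

    expr()
    return code

def run(code):
    """Tiny stack machine, just to show the emitted code is runnable."""
    stack = []
    for op, *args in code:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()
```

Note that the parse tree never exists as a data structure; its shape is only visible in the order of the emitted instructions, which is exactly the conflation the comment is objecting to.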
But it isn't a core concept if you do not need it. And an AST builder can be "injected" between the parser and the codegen at a later point, if needed. You do not even need to do it in one go: if your compiler has something like a "ParseExpression" procedure (assuming a recursive-descent parser that spits out code as it parses), you can start by building a partial AST just for expressions and leave everything else (declarations, control structures, assignments, etc., assuming those aren't part of an expression) as-is.
This is useful for both practical and teaching purposes: practically, because it keeps things simple when the additional complexity isn't needed (e.g. scripting languages); and pedagogically, because the student learns both approaches (both are used on real-world problems) while also learning why one might be preferable to the other. And if you take the partial-AST route, you introduce the idea of an AST gradually, building on knowledge and experience the student has already acquired.
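A hedged sketch of that partial-AST idea (all names here are invented for illustration): only expressions get AST nodes, walked by one small recursive emitter, while a hypothetical print statement is still compiled directly, single-pass style.

```python
# Partial AST: expressions become tuples like ("+", left, right) or an
# int leaf; statements never get a tree and are compiled directly.

def parse_expression(tokens, pos):
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        right, pos = parse_term(tokens, pos + 1)
        node = ("+", node, right)
    return node, pos

def parse_term(tokens, pos):
    node = int(tokens[pos])
    pos += 1
    while pos < len(tokens) and tokens[pos] == "*":
        node = ("*", node, int(tokens[pos + 1]))
        pos += 2
    return node, pos

def emit_expression(node, code):
    # The only tree walk in the whole compiler: expressions only.
    if isinstance(node, int):
        code.append(("PUSH", node))
    else:
        op, left, right = node
        emit_expression(left, code)
        emit_expression(right, code)
        code.append(("ADD",) if op == "+" else ("MUL",))

def compile_print_statement(tokens):
    # Statements still emit code directly; no AST node for "print".
    assert tokens[0] == "print"
    tree, _ = parse_expression(tokens, 1)
    code = []
    emit_expression(tree, code)
    code.append(("PRINT",))
    return code
```

The seam between the two worlds is exactly one function call: the statement compiler hands the token stream to the expression parser and gets a tree back, so the rest of the compiler can migrate to ASTs one construct at a time.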
Yes, it is. It also doesn't really make much difference per se, as e.g. Wirth-style compilers still maintain careful separation of the code generation and parsing.
And if you want to/need to later, you can trivially introduce an AST in those compilers by replacing the calls to the code-generator with calls to a tree builder, and then write a visitor-style driver for the code generation.
Yes, it's work of course, but it's quite mechanical work that requires little thought.
Instead of calling the code emitter, you call an AST builder.
Then you build a tree walker that calls the code emitter.
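The mechanical rewrite described above might look roughly like this (a sketch with invented names, not any particular compiler): where the parser used to call the emitter directly, it now constructs `Num`/`Add` nodes, and a separate visitor-style walker drives the same emitter.

```python
from dataclasses import dataclass

# AST nodes: what the parser now builds instead of emitting code.
@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

class CodeEmitter:
    """The old code emitter, unchanged: it just collects instructions."""
    def __init__(self):
        self.code = []

    def emit(self, *ins):
        self.code.append(ins)

class EmitWalker:
    """Visitor-style driver: dispatch on node type, call the emitter."""
    def __init__(self, emitter):
        self.emitter = emitter

    def walk(self, node):
        getattr(self, "walk_" + type(node).__name__)(node)

    def walk_Num(self, node):
        self.emitter.emit("PUSH", node.value)

    def walk_Add(self, node):
        self.walk(node.left)
        self.walk(node.right)
        self.emitter.emit("ADD")
```

The point is that the emitter itself did not change; only the call sites moved from the parser into the walker, which is why the retrofit is mechanical.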
The Wirth Oberon compiler, at least, was retrofitted with an AST by at least one student in Wirth's group as part of experiments with optimization passes.