Lisp featured support for garbage collection, exceptions done right, macros, optional typing, and native compilation. In fact, I would argue that many in the Lisp community were so obsessed with the hard bits that they overlooked building the simple things that see everyday use in the real world.
Quite a few of those bits were implemented in Lisp, both simple and hard. Some of them have been lost.
For example the TI Explorer Lisp Machine came with an X11 server written in Lisp. On my Symbolics Lisp Machine I used the usual MIT X11 server written in C - this was possible because the Symbolics Lisp machine had a C compiler.
I was under the impression that those machines evaluated Lisp forms in hardware. How could a C compiler work? Does it translate your code into Lisp s-expressions, or is there a bytecode layer in between?
You could translate to either Lisp or raw instructions for the Lisp machine, but it doesn't really matter. The main thing you do differently from a normal C compiler is translate pointers into pairs of a buffer and an index into that buffer. Pointer arithmetic then still works as far as the C spec requires it to.
What do you think a compiler does? The target instruction set can be any Turing-complete language. Compiling C to Lisp is no different from compiling Lisp to C (e.g., GNU Common Lisp), compiling ML to Java (which is easy enough that students do it in 300/400-level CS courses), and so on.
Well, the naïve way is to allocate a big block of memory for your C program and manage it with malloc/free as normal (basically, use the LispOS equivalent of sbrk(2)). Alternatively, since pointers are pointers, nothing stops you from having malloc(3) call out to the Lisp runtime's GC, with free(3) as a no-op - that's basically how the Boehm GC works. I'm not a systems programmer, so I'll admit this is a bit hand-wavy. Also, note that malloc/free are not part of the core C language per se, but of the standard library (a.k.a. the C runtime). Even if they were in the core language spec, you as the compiler author could implement them however you choose. They aren't magical.
How would a language's support for garbage collection make it any harder to compile a non-garbage-collected language into it? Why would you think that?
There's no code to hint at how garbage collection should be done, since it's automatic. You'd essentially have to include a library that emulates the GC of the source language, so it's not a direct translation from one language to another. At least that's what I was thinking.
Not really. None of the relevant (= you could buy them and use them) Lisp Machines evaluated Lisp forms directly - the evaluator was a program, too. They all used a processor that provided a special instruction set and Lisp data structures. For example, the processor of the Symbolics Lisp Machine was mostly a stack machine running compiled Lisp code.
You can compare it to running JVM instructions in hardware.
Thus the Symbolics Lisp Machine had C, Ada, Pascal and other languages implemented for it.
But so, as I recall, was the virtual machine that interpreted the bytecodes the compiler produced.
Here I'm referring to what I remember being told about the CADR's microcode (it had a LOT, with options for more if you were willing to pay $$$ for additional static RAM), but the others were all direct descendants and I gather did pretty much the same things.
It's not that it doesn't implement the hard bits; it just re-implements the hard bits on top of the hard bits. It requires an OpenGL context, which generally requires a running X server.