Lisp, Smalltalk, and the Power of Symmetry (2014) (insearchofsecrets.com)
136 points by saturnian on May 13, 2017 | 58 comments


The thing about both Lisp and Smalltalk that keeps making me feel alienated is that their power seems much weaker beyond their kingdom. The outside world does not have an object browser, nor is it made of s-expressions.

Tcl occupies a very nice place in this regard: its homoiconicity and symmetry (and late binding) come from text. The outside world, to a very close approximation, is also made of text. Subprocesses, sockets, FFI, files and user interaction just feel more native - in the image-oriented languages, I always find myself fighting the ambassador who imperfectly represents these things in forms the kingdom understands.

Just a feeling. They're all wonderful languages, and this article speaks well to some of the "why".


>The outside world does not have an object browser, nor is it made of s-expressions.

You'd be surprised.

Joking aside, you seem to fixate on an implementation detail. It's just that the computing world, or rather the Unix one, is "made of text".

The world is actually made of objects and data, and this is closer to Lisp and Smalltalk.


Wonderful article! "Smalltalk, like Lisp, runs in the same context it’s written in."

I have been programming professionally in Common Lisp (off and on) since the 1980s but there is something equally magical about Smalltalk. I have often thought that Smalltalk could be the language I use after I retire (I am in my 60s and I will probably stop working in about ten years).


Indeed, homoiconicity is a very powerful thing. It doesn't have to be core to the nature of the language, though; as far as I know, any Turing-equivalent language readily admits a metacircular interpreter, and so really a homoiconic language is a language with a compiler in the standard library.

As a thought experiment, imagine Lisp without macros. It's not hard; after all, "The Little Schemer" covers metacircular interpretation without ever mentioning macros. So what's going on? Apparently we don't need macros! But, we could add macros to a Lisp by reifying them in the metacircular interpreter. There's actually a feature in plain sight which makes this possible, and it's the humble (quote) special form. This is what makes code and data intermix so cleanly in Lisp.

This is why languages like Julia and Monte are not shy about using "homoiconic" to describe their language design; a standard library compiler is just as good as a compiler in the core semantics, as long as it's easy to use and meshes well with the rest of the language.
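
The (quote) point above can be sketched in a few lines of standard Common Lisp (MY-UNLESS and EXPAND-MY-UNLESS are made-up names for illustration):

    ;; QUOTE turns code into ordinary list data:
    (defparameter *form* '(+ 1 2))  ; a three-element list, not an addition
    (first *form*)                  ; => +
    (eval *form*)                   ; => 3

    ;; ...which is why a macro is, at bottom, a function from list to list:
    (defun expand-my-unless (form)
      (destructuring-bind (op test &rest body) form
        (declare (ignore op))
        `(if ,test nil (progn ,@body))))

    (expand-my-unless '(my-unless (= 1 2) (print "yes")))
    ;; => (IF (= 1 2) NIL (PROGN (PRINT "yes")))

A metacircular interpreter only needs to call such a function on the form before evaluating the result, which is exactly the "reify macros in the interpreter" move described above.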


No, this is incorrect. The syntax and the AST must be isomorphic for a language to be homoiconic. It's not enough to expose the compiler/AST as a first-class library.

Wikipedia has a nice entry on this. In short, "homoiconicity is where a program's source code is written as a basic data structure that the programming language knows how to access."

[1]: https://en.wikipedia.org/wiki/Homoiconicity


According to that page, the term was introduced by the designer of something called TRAC, who used it to denote the idea that the program is stored in the memory in the same form in which the user enters it, which allows it to be inspected and changed. Nothing about ASTs.


And TRAC is all about macros...

TRAC is lots of fun. I read about it in Nelson's book, Computer Lib/Dream Machines, when I was a freshman at Illinois. That spring, my Dad bought an Altair and I wrote a version of TRAC for it, in assembly. Had support for bignum arithmetic, in ASCII :-)

Later, when I learned about tail recursion, I was happy to figure out that my implementation was indeed properly tail recursive.


What you said and what I said are in agreement, so I'm not sure I follow. "Program stored in memory in the same form in which the user enters it" makes it isomorphic. Which was exactly the point I was trying to make in response to the parent comment.

You cannot simply expose the compiler/AST data structure and call your language homoiconic, because the text the user enters and the resulting AST are not isomorphic, which is a necessary precondition for homoiconicity.

I may be missing something in what you said though, so please do let me know.


By "first-class", I really did mean "indistinguishable from the rest of the core of the language." Consider Monte:

    def x :Int := 42 # evaluated statement
    def ast := m`def x :Int := 42` # quasi-quoted Monte fragment
    eval(ast, safeScope) # easy evaluation
Now, it happens that m`` is a library written in Monte itself, but that's unsurprising when you consider how much of the Monte compiler is also self-hosting. Since Monte is a complex and rich language, the homoiconic representation is equally rich:

    def m`def @lhs := @rhs` := ast # pattern-matching!
    [lhs, rhs] # [mpatt`x :Int`, m`42`]


The Julia developers have backed away from the claim that Julia is homoiconic; they no longer describe it that way. Nevertheless, your points are really interesting, to the extent that I can understand them.


> imagine Lisp without macros.

Early on in my Scheme career, I found the tools to create macros a bit confusing and arcane... but I still knew I wanted macros.

I ended up writing code transformers - a poor man's macro system if you will, taking my "high level" foo.scm through a couple of translation layers that turned the abstractions I wanted into running code. It was literally:

  $ scheme expand-foo.scm < myprog.scm > myprog1.scm
  $ scheme expand-bar.scm < myprog1.scm > myprog2.scm
  $ scheme myprog2.scm
What made this possible - trivial - was the homoiconicity: simply by (read)ing a program from stdin I had a list of lists that I could pattern match over and make the transformations I wanted. Exactly as my final program did with ordinary data.

In some ways, this was a more satisfying approach than using define-syntax / syntax-case, which differ from the rest of Scheme in somewhat uncomfortable ways. That macros could never be first class eventually put me off, but that's another story :).
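
A minimal version of such an expander pass might look like this (R7RS-ish Scheme; the `my-when` form and the `expand` function are made-up names for illustration):

    ;; Read forms from stdin, rewrite (my-when test body ...) into
    ;; plain IF, and write the result to stdout. Homoiconicity makes
    ;; this an ordinary list-to-list transformation.
    (define (expand form)
      (cond ((not (pair? form)) form)
            ((eq? (car form) 'my-when)
             `(if ,(expand (cadr form))
                  (begin ,@(map expand (cddr form)))
                  #f))
            (else (map expand form))))

    (let loop ((form (read)))
      (unless (eof-object? form)
        (write (expand form))
        (newline)
        (loop (read))))

Run in a pipeline exactly as in the transcript above: `scheme expand-foo.scm < myprog.scm > myprog1.scm`.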


> Smalltalk is powerful because all Smalltalk data are programs–all information is embodied by running, living objects.

That's what Lisp systems do too. Program elements like classes, functions, methods, symbols, ... are first class objects. With something like CLOS you have a similar level of object-oriented meta-programming capabilities.

Many Lisp systems additionally offer to execute Lisp data using a Lisp interpreter, and Lisp has a simple data representation for Lisp programs: Lisp data.

Smalltalk OTOH uses text as source code and usually a compiler to byte-code.

> because Lisp source code is expressed in the same form as running Lisp code

Only if you use a Lisp interpreter. Otherwise the running Lisp code might be machine code or some byte code.

> Smalltalk goes one further than Lisp: it’s not that Smalltalk’s source code has no syntax so much as Smalltalk has no source code.

That's a misconception. Smalltalk has source code. As text. It's just typically managed by the integrated development environment.

It's actually Lisp which goes further than Smalltalk, because Lisp has source as data and can use that in Lisp interpreters directly for execution.


> Smalltalk OTOH uses text as source code

That is not completely correct. It uses a mixture of text (strings) and objects. The class graph is composed of objects, but the method bodies are stored as objects and (optionally) strings.

To edit the class graph, it presents (parts of) it as text that you can edit (see ClassDescription>>definition in Squeak). E.g. to allow you to edit the Behavior class, it generates the following string and presents it in a text editor:

  Behavior subclass: #ClassDescription
	instanceVariableNames: 'instanceVariables organization'
	classVariableNames: 'TraitImpl'
	poolDictionaries: ''
	category: 'Kernel-Classes'
Notice that this is a Smalltalk statement that can be evaluated. If you edit this string and accept it, it will evaluate the code, which updates the objects describing the class. The primary representation is not textual, but an object graph.

A method is stored as byte code, and optionally as a string. The system will present you with a textual representation that you can edit, which is either the stored string or the decompiled byte code (which loses the original comments, indentation, and variable names). You can strip the textual representation of all methods to slim down the image (see SmalltalkImage>>abandonSources).

You can also file in/out a textual representation of classes and their methods. But that is not the primary representation of the code.


> That is not completely correct. It uses a mixture of text (strings) and objects. The class graph is composed of objects, but the method bodies are stored as objects and (optionally) strings.

All changes to the class graph are also stored as changes in text. Every class has a textual representation. You can load an earlier image and replay this. This is basically like loading Lisp code into a Lisp image.

> it will evaluate the code which updates the objects describing the class.

This is like Lisp. The Lisp code manipulates the runtime class graph.

> But that is not the primary representation of the code.

The primary representation is text. That's what the IDE presents you when you edit the method.


For traditional Smalltalk systems (i.e. anything except GNU Smalltalk), that textual representation is generated by serializing the object graph. And the traditional text format is not exactly designed to be human-editable; it also does not describe classes as a self-contained concept. It is a stream of expressions interspersed with strings that get magically processed by something that was set up by previous expressions (e.g. Behavior>>#methodsFor: switches the deserializer into this second mode).

The problem with Smalltalk is that you lose all its power when you stop using the IDE. The idea that Smalltalk's runtime metaobject system is somehow equivalent to Lisp's macros is completely wrong, but the system as a whole is powerful enough to make it look like it does not need macros, because:

1) the IDE automates many things away (but often by doing textual transformations on the source code)

2) the general ST programming culture builds heavily on monkey patching (which is also usually somehow automated by the IDE)

Even with this, ST is also where the whole idea of design patterns started, with half of the patterns being workarounds for an insufficiently expressive language (i.e. no macros and multiple different "callable" types/syntax categories).


> All changes to the class graph are also stored as changes in text. Every class has a textual representation. You can load an earlier image and replay this. This is basically like loading Lisp code into a Lisp image.

I forgot about that. The changes file is more like a log file, though (e.g. if you change a class multiple times, the changes file will have multiple versions of that class in it). It is generated as a side effect of manipulating the object graph in the image. The normal development workflow is not to edit the changes file and then load it (which is how I usually edit Lisp code).

> This is like Lisp. The Lisp code manipulates the runtime class graph.

Agree.

> > But that is not the primary representation of the code.

> The primary representation is text. That's what the IDE presents you when you edit the method.

My use of 'representation' is wrong here, I should have stuck with 'source'.

For the body of methods I agree with you. For the class graph, the source is the object graph, and the IDE has multiple ways to present it to you. The textual representation is but one representation (generated from the object graph). You can e.g. also display the class hierarchy as a hierarchical list. To manipulate the source, aside from editing text, you can also use commands to rename a class, or delete instance variables in a list view (if I remember correctly in Cincom). And yes, you have similar things in Slime, but then your image gets out of sync with the textual source.

I don't think there is a clear line between development in Lisps and Smalltalks, though. My experience with (Cincom) Smalltalk was that it was much more image based and not file based. That had the drawback that e.g. editors, diff tools and source control were reimplemented in Smalltalk (often slow and buggy). An advantage is that e.g. debugging is more tightly integrated: it is trivial to implement an undefined method inside the debugger and continue, while in SLDB adding an undefined method is slightly more clumsy (e.g. the debugger wouldn't know which file to put it in). My experience with e.g. SBCL and LispWorks is that it is much more text/file based. I start from text files and evaluate (parts of) them. Every once in a while I restart the Lisp image and reload from scratch. But you can find Smalltalks that are text based (e.g. GNU Smalltalk) and Lisps that are more image based (e.g. Symbolics).


> Smalltalk doesn’t need macros because it has classes instead.

I'm not sure this is true. Surely any programming language that lacks macros would be more powerful with them.


It might be more accurate to say:

> Smalltalk doesn’t need macros because it has classes, powerful introspection capabilities, and simple expressive syntax (especially blocks) instead.

There's a debate to be had about whether compile-time macros are superior to passing blocks as arguments. It is also easy to make your language parser extensible or easy to modify without having traditional Lisp macros. Metalua does something like this.


Yea, that's a weird statement; a better one is that Smalltalk doesn't need macros because it has a clean syntax for lambdas that removes a common use for macros: hiding boilerplate use of Lisp's lambda. Beyond that, Smalltalk isn't file based; you don't edit some version of the code that gets compiled (and macro expanded) into some runtime version of the code you can only see through introspection. Rather, in Smalltalk you're actually editing the runtime version in a running image. Macros simply wouldn't fit into Smalltalk in any meaningful way, and Smalltalk's syntax is pretty much already ideal for building DSLs without the need to clean it up with macros.

What makes both Lisp and Smalltalk interesting is that there's no difference between language and library; the constructs you create yourself are on equal footing with the ones most consider built in. Macros let you build special forms to control evaluation semantics; Smalltalk simply uses blocks [ ] to delay evaluation, and both languages are their libraries. Lisp has functions/macros, if/cond, etc.; Smalltalk has objects used in a way you simply do not see in other object-oriented languages. Smalltalk has no if statement, no while statement, no reserved words beyond true, false, nil, self, super, and thisContext; everything else is library, including all control flow constructs, which are implemented with objects/classes/inheritance and polymorphism.

They are both "pure" languages in a sense, and that pleases some people greatly. If you haven't programmed in Smalltalk, you really have no idea what object oriented actually means at a deep level. All of the popular so-called OO languages are actually just procedural languages with hundreds of special keywords that have classes, but the languages themselves aren't built from classes and objects; they're procedural and defined by the compiler writer as special forms you cannot create yourself. Having objects, and being truly object oriented all the way down, are drastically different things.
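
A workspace-style sketch of the "control flow is just messages" point (plain Smalltalk, nothing implementation-specific):

    | n result |
    n := 3.
    "ifTrue:ifFalse: is a message sent to a Boolean object;
     the branches are block objects, evaluated only on demand"
    result := n > 0
        ifTrue: [ 'positive' ]
        ifFalse: [ 'non-positive' ].
    "whileTrue: is a message sent to a block; the 'loop' is
     implemented in the class library, not in the compiler"
    [ n > 0 ] whileTrue: [ n := n - 1 ].

Nothing here is a special form: Boolean and BlockClosure are ordinary classes, and you can browse and subclass the methods that implement the "syntax".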


> Beyond that, Smalltalk isn't file based, you don't edit some version of the code that gets compiled (and macro expanded) into some runtime version of the code you can only see through introspection; rather in Smalltalk you're actually editing the runtime version in a running image.

That's not really true. Smalltalk is text-based, too, but hides it behind an integrated source management system. When you edit a method in a Smalltalk IDE, then you edit TEXT. The text then gets compiled to typically some byte code which gets interpreted by the Smalltalk virtual machine (which also might have some way to convert it to machine code).

If the text of the source code is not available, then Smalltalk needs to disassemble the byte code. But the disassembled byte code is not equal to the original source.

The sources are EXTERNALLY kept as text, outside the running system.

Just download your favorite Squeak and check out the contents. There is a huge sources file and there is a changes file. Those are text files with the sources and its changes.

This is actually different from some Lisp system, where the source actually is data inside the running Lisp and the Lisp interpreter runs this data. If you edit this code, Lisp then presents you a structure editor, which works on this data - not on text. It's not what a typical Lisp system does today, but it is still a possibility. Xerox' Interlisp used to use a structure editor for Lisp source code as data and a source code management system based on that.

This is different from Smalltalk, where the 'Interpreter' runs compiled byte-code and the byte code is generated from source code, which is actually text and stored outside the Smalltalk image. The Smalltalk image has then source code management data, like an index in each method which points to its external source.

Typical Lisp systems are doing the same. They record the source code location for functions and other things. If you edit the source for a method in a typical Smalltalk environment, it will retrieve the text for the method and in a text editor you can edit the text then. In a typical Lisp environment, the Lisp system will present you the whole text file and just jump to the definition using the editor...


I'm well aware, I've programmed in Pharo daily for a decade; I Smalltalk for a living.

What I said is true practically speaking; you're getting into implementation details that don't make a practical difference. Lisp and Smalltalk are both image-based systems, but Smalltalkers actually work on the running live image all of the time. They aren't ever booting from the sources file, nor are they ever editing it manually (excluding the file-based GNU Smalltalk); it's paired with a changes file to enable the IDE to expose the current source of a method, but those are hidden implementation details, whereas I'm talking about the abstraction presented to the programmer. Lispers "can" do this as well, and sometimes do, but Lisp has a much more standard general workflow of editing files that are then read and launched to create/patch a runtime image. It's not the normal workflow to work entirely in the REPL, which is essentially what Smalltalkers do.

Consider the difference: what Smalltalkers see in their code browsers is a text version of what's actually running; if they had macros, which are nothing more than code generators, they'd be seeing the macro expansion rather than the original source call to the macro. Certainly Lispers "can" do this as well; they can macro-expand something to see what's actually running, but as their primary mode is editing the original source, they're accustomed to seeing the macro unexpanded. They might define an accessor like

    (name :accessor person-name
          :initform 'bill
          :initarg :name)
And from experience know that an accessor is a getter, setter, and backing instance variable, but they don't see those things in their editors; they just know they'll exist at runtime. A Smalltalker, by contrast, is accustomed to looking at the runtime, where he actually sees the getter, setter, and backing instance variable.

Macros just don't fit into Smalltalk; they've been added before, people keep trying it, and it just doesn't fit and so doesn't catch on.


> editing it manually

You edit the source code via the IDE. Just like in Lisp systems. It's just that the IDE works differently.

> Smalltalk'ers actually work on the running live image all of the time

That's the dominant way to work in Lisp, too. My Lisp Machine even runs it as an OS. I use LispWorks for development on my Mac - the IDE is the running Lisp system.

> it's not the normal workflow to work entirely in the REPL

Actually that's the default mode. If you develop Lisp code with SLIME / GNU Emacs, it talks to a live Lisp system.

> if they had macros, which are nothing more than code generators, they'd be seeing is the macro expansion rather than the original source call to the macro.

That's not how Lisp works. Macros are not simple code generators, where the workflow would be write macro code, expand that, and use the expanded code. The Lisp developer is not generating code and working with that generated code. The generated code is usually hidden and generated by the Lisp system on demand/incrementally for internal use.

> And from experience know that an accessor is a getter, setter, and backing instance variable but they don't see those things in their editors, they just know they'll exist at runtime; whereas a Smalltalker is accustomed to looking at the runtime where he actually sees the getter, setter, and backing instance variable.

In this case the Lisp developer sees the generated objects: the getter and setter slot definitions. What the Lisp developer usually does not look at is the code generated by the macros. It might be useful to have source code for the macro forms and to be able to edit them, but it is usually not done in Lisp. There are many macros where there are no generated objects and the resulting code is extremely complex and large. Thus it would not make sense at all to present or edit this code...

> don't see those things in their editors, they just know they'll exist at runtime

I'll see it in the runtime. The introspective capability tells me that it is there. If I want to edit them, I would then go back to the source. For example, when I call (ed 'foobar) to edit the accessor foobar, it would open up the defclass form in the editor. Some Lisp systems (like LispWorks) also offer the link back from the objects to the macro form that was responsible for creating them. What I usually don't want to see is the generated code, since that's not fit for human consumption.

> Macros just don't fit into Smalltalk; they've been added before, people keep trying it, and it just doesn't fit and so doesn't catch on.

Because they are difficult to integrate into Smalltalk and its idea of an IDE. Macros add a lot of complexity to the system and to how developers use it. They offer code manipulation, and the price is losing the direct connection between the object code / the objects and the source code.


> That's not how Lisp works. Macros are not simple code generators, where the workflow would be write macro code, expand that, and use the expanded code.

You're putting words in my mouth, and yes macros are just code generators. I didn't imply or say they're expanded and then devs use the expanded code. The whole point of them is you're not supposed to think about or see the generated code, the macro becomes the abstraction the dev works with.

> In this case the Lisp developer sees the generated objects: the getter, setter slot definitions. What the Lisp developer usually does not look at is the code generated by the macros. There are many macros, where there are no generated objects and the resulting code is extremely complex and large. Thus it would not make sense at all to present this code or edit this code...

Exactly the point I was making. You're actually agreeing with me though the tone doesn't come off that way.

> Actually that's the default mode. If you develop Lisp code with SLIME / GNU Emacs, it talks to a live Lisp system.

I said multiple times Lisp "can" work that way; stop being so defensive. But the point in fact is most Lisp developers don't work that way. They don't live in the REPL full time like Smalltalk devs do; you load from source quite often, in comparison to the zero times Smalltalkers do.

Look, this isn't a competition; Lisp is technically superior to Smalltalk in features, both due to macros and due to CLOS and its multiple-dispatch OO system. Lisp was the inspiration for Smalltalk; Alan Kay goes so far as to call eval/apply the Maxwell's equations of computer science. However, I and many others, having tried both Lisp and Smalltalk, prefer Smalltalk, largely due to the syntax and dev environment. Emacs and SLIME just don't cut it for us. Generic functions don't feel like object orientation; the way Smalltalk does OO just feels better, and the dev environment feels better than anything else out there.


> I said multiple times Lisp "can" work that way, stop being so defensive. But the point in fact is most Lisp developers don't work that way, they don't live in the REPL full time like Smalltalk devs do, you load from source quite often in comparison to the 0 times Smalltalkers do.

That Lisp developers load source does not mean that the image gets restarted. Lisp usually is designed such that you can incrementally update/manipulate the running image. As I said that's the dominant development style for Lisp, AFAIK.

> the dev environment feels better than anything else out there.

That's what my friends who still use Symbolics Open Genera tell me all the time. ;-)


On living in the REPL full time and having a better environment, is there something you wish Smalltalk could do to become a better environment that it doesn't?

What would feel better?


Yes, several: it's slightly too dependent on the mouse for focus; I'd rather never touch a mouse. I wish it used native windows (this is specific to Squeak/Pharo) so my editor felt like any other editor rather than the window within a window that it currently is (some other Smalltalks do this, but they're Windows-based and I'm on Linux). I wish the code browser had a back button so I could go dig into the source of something and then come back, rather than popping a new browser to investigate that thing. Even better, I'd like bookmarks like you get in more modern IDEs, where you can mark a few key spots you want to constantly be jumping between, rather than having a different browser open on each one.


> Actually that's the default mode. If you develop Lisp code with SLIME / GNU Emacs, it talks to a live Lisp system.

So do you dump your image at the end? My understanding is that most people work with files, and execute stuff occasionally while they're editing, then save the files. Next time, you reload from source - you don't start from an image with state. And if you fail to run something, your source and your repl become out of sync.

I've never seen anyone actually doing image-based development in a lisp, despite some lisps supporting it - even in livecoding (music), the standard is to edit a file, then occasionally send parts of the file to a repl.


> So do you dump your image at the end?

For example if you deliver an application, that's what one usually does.

> My understanding is that most people work with files, and execute stuff occasionally while they're editing, then save the files

One works with files, but together with the running Lisp system. I would work with a mix of compiled (fast-load) and source files. Let's say you work on a new version of your graphics editor. You start the Lisp image and load the current version into it - mostly from fasl files. Some years ago people would more often have started from a dumped image, because it took some time to load the fasl files. Even today, if the program is really large, you might want to use a dumped image of some version of it.

But independent of how you reach there, you start Lisp and then recreate a state of the working image (either from a dumped image or from loading files), where you continue to work from.

The image then will have the development information, debug information, application state, compiler info, ...

In Smalltalk you would typically load a saved image more often. In Lisp you would recreate the state from a base image and then fast-load the stuff you need.

> And if you fail to run something, your source and your repl become out of sync.

Yes, that happens, and current Lisp systems actually don't have a goal of keeping that in sync. You have to check manually that the source or the compiled system actually loads and runs - meaning recreate the correct state.

> I've never seen anyone actually doing image-based development in a lisp, despite some lisps supporting it - even in livecoding (music), the standard is to edit a file, then occasionally send parts of the file to a repl.

In the sense of Smalltalk (image + managed source code) there are only a few people doing that in Lisp - nowadays. That debate (managed code images vs. a mixed file/image model) was lost in the early 80s, with some later attempts to create a managed-source model with code in databases.

In the sense of saving images as pre-loaded code, that's seen, especially for application delivery. The LispWorks IDE I'm using has a lot of stuff pre-loaded (with some on-demand loading) as a single image. If I develop something, it is loaded into it and the application then is a newly dumped image. LispWorks also supports some exotic stuff like regularly saved sessions. It also keeps track of which sessions have been saved based on which other sessions.

http://www.lispworks.com/documentation/lw70/IDE-M/html/ide-m...


GNU-Smalltalk is not just text based, but file based. http://smalltalk.gnu.org/


> Smalltalk isn't file based, you don't edit some version of the code that gets compiled (and macro expanded) […] Macros simply wouldn't fit into Smalltalk in any meaningful way […]

You speak as if “macros” means “macros as implemented in the C preprocessor”. Lisp macros, as I understand it, operate on the parsed syntax tree, not on the file text level, and are expanded at runtime, not compile time.


The key to understanding macros is to be quite clear about the distinction between the code that generates code (macros) and the code that eventually makes up the program (everything else). When you write macros, you're writing programs that will be used by the compiler to generate the code that will then be compiled. Only after all the macros have been fully expanded and the resulting code compiled can the program actually be run. The time when macros run is called macro expansion time; this is distinct from runtime, when regular code, including the code generated by macros, runs.

http://www.gigamonkeys.com/book/macros-defining-your-own.htm...


Is it not the fact that macros can be defined and, more importantly, redefined at runtime? If so, would that not mean that macros must also be expanded at runtime?


If you have compiled code, you have to recompile the code when you change a macro. It's not done automagically.

If you have interpreted Lisp code, then the code can use the new macro automagically.
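
A schematic illustration in portable Common Lisp (DOUBLE and F are made-up names; exact behavior for the interpreted case is implementation-dependent):

    (defmacro double (x) `(* 2 ,x))
    (defun f (n) (double n))   ; if F is compiled, the expansion (* 2 N)
                               ; is baked into its code here
    (f 10)                     ; => 20
    (defmacro double (x) `(* 3 ,x))
    ;; a compiled F still returns 20 until its DEFUN is re-evaluated or
    ;; recompiled; an interpreted F may pick up the new expansion at once
    (defun f (n) (double n))   ; recompile/re-evaluate the caller
    (f 10)                     ; => 30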


This is the view of a compiled execution. Lisp can also be interpreted from source as data. Then macro-expansion time is interleaved with execution time. Each use of a macro form may trigger a macro expansion.

Remember, a Lisp interpreter interprets Lisp source as data, not byte code. Unlike Python, Java, Smalltalk, ..., which all have popular implementations that compile to byte code and execute that byte code in a byte code interpreter, aka a virtual machine.

Let's use a Lisp interpreter, here from LispWorks:

We define a primitive MY-IF macro. It expands into a simple IF use. But the macro will also count the number of macro expansions.

    CL-USER 46 > (defparameter *myif-counter* 0)
    *MYIF-COUNTER*

    CL-USER 47 > (defmacro my-if (c a b)
                   (incf *myif-counter*)
                   `(if ,c ,a ,b))
    MY-IF

LispWorks can trace macros, too.

    CL-USER 48 > (trace my-if)
    (MY-IF)

Now a simple function which uses our macro:

    CL-USER 49 > (defun fac (n)
                   (my-if (= 1 n)
                          1
                          (* n (fac (1- n)))))
    FAC

Now we use it and we will see the trace information for the macro use: you see the incoming form and the result form.

    CL-USER 50 > (fac 2)
    0 MY-IF > ...
      >> COMPILER::FORM        : (MY-IF (= 1 N) 1 (* N (FAC (1- N))))
      >> COMPILER::ENVIRONMENT : #<Augmented Environment venv NIL fenv ((#:FUNCTOR-MARKER . #<COMPILER::FLET-INFO (# # # #)>)) benv NIL tenv NIL decl NIL>
    0 MY-IF < ...
      << VALUE-0 : (IF (= 1 N) 1 (* N (FAC (1- N))))
    0 MY-IF > ...
      >> COMPILER::FORM        : (MY-IF (= 1 N) 1 (* N (FAC (1- N))))
      >> COMPILER::ENVIRONMENT : #<Augmented Environment venv (#<Venv 275415194600  N>) fenv ((#:SOURCE-LEVEL-ENVIRONMENT-MARKER . #<COMPILER::FLET-INFO (NIL . #)>) (#:FUNCTOR-MARKER . #<COMPILER::FLET-INFO (# # # #)>)) benv NIL tenv NIL decl NIL>
    0 MY-IF < ...
      << VALUE-0 : (IF (= 1 N) 1 (* N (FAC (1- N))))
    0 MY-IF > ...
      >> COMPILER::FORM        : (MY-IF (= 1 N) 1 (* N (FAC (1- N))))
      >> COMPILER::ENVIRONMENT : #<Augmented Environment venv NIL fenv ((#:FUNCTOR-MARKER . #<COMPILER::FLET-INFO (# # # #)>)) benv NIL tenv NIL decl NIL>
    0 MY-IF < ...
      << VALUE-0 : (IF (= 1 N) 1 (* N (FAC (1- N))))
    0 MY-IF > ...
      >> COMPILER::FORM        : (MY-IF (= 1 N) 1 (* N (FAC (1- N))))
      >> COMPILER::ENVIRONMENT : #<Augmented Environment venv (#<Venv 275416002360  N>) fenv ((#:SOURCE-LEVEL-ENVIRONMENT-MARKER . #<COMPILER::FLET-INFO (NIL . #)>) (#:FUNCTOR-MARKER . #<COMPILER::FLET-INFO (# # # #)>)) benv NIL tenv NIL decl NIL>
    0 MY-IF < ...
      << VALUE-0 : (IF (= 1 N) 1 (* N (FAC (1- N))))
    2

Let's see how often our macro function has been used to expand code:

    CL-USER 51 > *myif-counter*
    4


My understanding is that typically, Lisp macros are expanded at 'compile' time. The compiler expands macros recursively until there's nothing left to expand and then compilation proceeds normally.

A Lisp macro operates on data structures. Macros transform one data structure into another data structure. Lisp code is a data structure (typically a list) and ultimately all macros get resolved by expansion to ordinary Lisp code and the expanded code then is compiled/interpreted.

Unlike C, Lisp macros have access to all of Lisp (including other macros) during expansion. So a macro's expansion can be determined programmatically by executing Lisp code... this is where it gets harder to understand macros intuitively, because the code that runs during the macro expansion phase does not have access to the code that runs at runtime, and vice versa.

That's why it is handy to think about macros as manipulating data structures...we don't expect data structures to return values or generate values in the environment by execution.
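
To make that concrete, here is a minimal sketch in portable Common Lisp (MY-WHEN is a made-up name for illustration, not from the thread) showing a macro as a pure list-to-list transformation, inspected with MACROEXPAND-1:

```lisp
;; A macro is code that receives code (a list) and returns code (a list).
(defmacro my-when (test &body body)
  `(if ,test (progn ,@body) nil))

;; MACROEXPAND-1 applies the transformation once, without running the result:
(macroexpand-1 '(my-when (> x 0) (print x)))
;; => (IF (> X 0) (PROGN (PRINT X)) NIL)
```

Note that the input and output are ordinary lists; no evaluation of the program happens, only a transformation of data.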


I find symmetries a very necessary concept. It's a complexity divider. A bit like self similarity or inductive reasoning, it turns chaos into cosmos.


> What most of these languages seem to miss is that Smalltalk’s class system, like Lisp’s macro system, is a symptom of the power already available in the language, not its cause. If it didn’t already have it, it wouldn’t really be that hard to add it in yourself.

What most of these articles seem to miss is that Java's designers were themselves expert Lispers and Smalltalkers, and they most certainly realized all that, and that Java's success is a consequence of them understanding exactly why not to repeat the same design. Design doesn't live in a vacuum. Design is shaping a product not just to fit some platonic ideal, but reality, with all its annoying constraints.

To understand why Lispers and Smalltalkers designed Java the way they did, I recommend watching James Gosling's talk, How The JVM Spec Came To Be[1], and the first 20 minutes or so of Brian Goetz's talk, Java: Past, Present, and Future[2].

[1]: https://www.infoq.com/presentations/gosling-jvm-lang-summit-...

[2]: https://www.youtube.com/watch?v=Dq2WQuWVrgQ


Gosling was an expert Lisper? I only heard that he developed a strange/tiny Lisp variant called Mocklisp as extension language for his Emacs editor.

> Design doesn't live in a vacuum.

Java was designed as a modernized/slim replacement for C++ when developing set-top boxes and PDAs. What SUN took from Lisp and Smalltalk in some limited form was the runtime: managed runtime with GC, code loading, typed objects and a virtual machine. VMs were thought as an advantage on machines with little memory, because of compact code representations. Various Lisps and also Smalltalk had that. But that was mostly it. The language level wasn't influenced by Lisp at all: no Lisp syntax, no Evaluator, no lambdas, no code-as-data, no macros, no support for functional programming, ...

https://en.wikipedia.org/wiki/Oak_(programming_language)


That's precisely the point. Watch the talks I linked to. Gosling presents Java's design as a wolf in sheep's clothing. They figured that the features most important in Lisp and Smalltalk are memory safety, GC, dynamic linking and reflection, shoved all of them into the JVM, and wrapped them in a non-threatening language that could actually gain significant traction. They did that because they realized that the linguistic features are not where most of the power lies. That's what good design looks like: you hide the power in a palatable package.


This is completely wrong. Guy Steele (who is an expert Lisper as opposed to James Gosling) has hinted, numerous times, that Java was a compromise.

Here is a quote of his:

"And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy?"

Regarding Java's "success" (which falls in the same category as PHP's success, Python's success, Javascript's "success" and so on) I urge you to consider it as a classic example of "Worse is Better".

Lisp (and Smalltalk and Erlang and Forth and ..) do not have mass-market appeal because they do not easily hand out a feeling of immediate rewards, that a lot of newbie programmers find so attractive. They require more upfront investment from the user before they unveil their secrets, before one "gets it".


> has hinted, numerous times, that Java was a compromise.

How does that make what I wrote completely wrong? Good design is a compromise. Gosling presented Java's design as a wolf in sheep's clothing. They figured that the features most important in Lisp and Smalltalk are memory safety, GC, dynamic linking and reflection, shoved all of them into the JVM, and wrapped them in a non-threatening language that could actually gain significant traction. That's what good design looks like.

> Regarding Java's "success" (which falls in the same category as PHP's success, Python's success, Javascript's "success" and so on) I urge you to consider it as a classic example of "Worse is Better".

Eh. Unlike PHP (and maybe Javascript and Python, too), more useful good software has been written in Java than in any other language in the history of computing, with the possible exception of C. I don't know by what metric -- other than personal aesthetic preference -- you'd consider it "worse" (or, conversely, what your metric for success is). Remember that Java was designed to be a conservative language for industry use. In his article outlining Java's design[1], Gosling writes: "Java is a blue collar language. It’s not PhD thesis material but a language for a job. Java feels very familiar to many different programmers because I had a very strong tendency to prefer things that had been used a lot over things that just sounded like a good idea." I think it is funny to doubt Java's success considering its stated mission, goals and non-goals. Smalltalk also tried to become a commercially successful language. I think it is equally funny not to see it as a failure in that regard, which was certainly among its goals. The extensive work done on Smalltalk (Self, really) at Sun and elsewhere was quickly absorbed by Java, and so Smalltalk has certainly achieved success in enabling Java.

[1]: http://www.win.tue.nl/~evink/education/avp/pdf/feel-of-java....


Well, unfortunately, the features most important in Lisp and Smalltalk are not memory safety, GC, dynamic linking and reflection.

Which is one reason Java is a shitty language. It may be popular, "a blue collar language" but it's not sitting on some apex of programming languages, and it's certainly not Art. Which makes sense if you consider that the vast majority of programmers working today are not artists or craftsmen, but little more than commoditized manual laborers. I also include the "engineers" working at such perceived bastions of engineering excellence as Google here [1].

[1] https://news.ycombinator.com/item?id=14066898


Well, that's your opinion. I'm not sure by what metrics you think languages should be ranked. There's certainly no evidence that any other language results in better software, or that linguistic features have a significant bottom-line impact at all. The few studies we do have indicate that the choice of language makes little difference, certainly when it comes to large software. Of course, language designers and PL enthusiasts make all sorts of claims, but they're largely unsubstantiated at this point.


Lisp and Smalltalk actually suffer from the same problem: late-binding sucks. When I was in college a professor once pointed out to me that he didn't know of an LL(1) parser for Smalltalk. There's a reason for that: Smalltalk's syntax is late-bound! It's almost like Forth's syntax: the reader consumes words and decides what to do with them on the spot, whether they represent variables, operators, constants, or parts of a message send and once it has a subject, verb, and objects, dispatches the message also on the spot.

This plays havoc with your ability to do static analysis, and languages that hinder static analysis should not be used in real-world systems. If the earliest you find out about errors is in a running system, it's far too late and you are hosed.

This is why the Lisp and Smalltalk Evangelism Strikeforces have been met with decades of failure, while the Rust Evanglism Strikeforce is getting on with a massive project of digital tikkun olam.


> This is why the Lisp and Smalltalk Evangelism Strikeforces have been met with decades of failure, while the Rust Evanglism Strikeforce is getting on with a massive project of digital tikkun olam.

If that were true, then Javascript, Python, Ruby, PHP, etc would also have failed. Smalltalk and Lisp failed to become popular in the modern world for reasons having nothing to do with late binding.

How about wait until Rust is at least as widely used as Ruby before going on about how much of a failure Smalltalk and Lisp are. Let's see if Rust stays around as long as Smalltalk & Lisp have, or whether it has that kind of influence on other languages.


There are alternatives that make static analysis look like the suboptimal approach.

https://pointersgonewild.com/2015/09/24/basic-block-versioni...


Static analysis is not just about inferring types and hot paths for optimization. For that kind of stuff, a dynamic analysis is most of the time way better (lots of JIT compilers with speculative optimizations prove this point). There is another goal where static analysis shines: verification. If I'm writing a somewhat critical application, I want to make sure that it behaves according to my intent in all cases. For example, if I'm writing airplane software, I want to make sure that it will at least never invoke any undefined behavior (this is done in the ASTREE project). Static analysis is a powerful tool to give such guarantees in many cases.

Some highly dynamic language features make analysis really imprecise or really hard (in terms of computation cost). There has been quite a lot of work on making static analyses that can handle such language features (for example control flow analysis helps analyzing code that uses dynamic dispatch or closures a lot but cost of the analysis is exponential in terms of the precision level most of the time). Sometimes people tackle analyzing highly dynamic languages like JavaScript but at a huge time cost in certain cases [1]. I'd prefer using a language designed with static analysis in mind if I were to prove certain properties about my code.

[1]: http://www.cs.ucsb.edu/~benh/research/papers/dewey15parallel...


Right, but an important point here is "some highly dynamic language features make analysis really imprecise or really hard". Not all "late bindings" are born equal. JavaScript's dispatch is very different from Java's from a static analysis perspective, the latter being almost indistinguishable from pattern matching.


I totally agree with that. When I mentioned dynamic dispatch, I had something like Scheme or JavaScript in my mind. Dynamic dispatch in those languages require a more subtle analysis to obtain precise results. I wish I could edit my parent comment to clarify my example.

To clarify my position (hence my parent comment's point) on this general matter, I'm OK with any language feature that is amenable to static analysis in a practical amount of time. This puts me closer to PL conservatives in Steve Yegge's spectrum.


I think the standard counterargument is that if unit testing is ubiquitous, we're already admitting static analysis is at best very incomplete.


I think some guys actually proved that in the 1930s. But so what? Program analysis and testing complement one another.


I wouldn't be so sure. ACL2 is written in LISP. http://www.cs.utexas.edu/users/moore/acl2/


> Lisp and Smalltalk actually suffer from the same problem: late-binding sucks.

That's a feature, not a bug. Late binding rocks.

> This plays havoc with your ability to do static analysis, and languages that hinder static analysis should not be used in real-world systems.

The real world is full of late bound languages; much of the internet runs off late bound languages including this site. There's a million Rails and Python apps out there, so basically this "opinion" of yours is not bound to reality.

All of biology is late-bound, cells communicate via message passing, so the oldest and most complex real world systems we know of are late bound. To dismiss late binding is naive at best.


> That's a feature, not a bug. Late binding rocks.

So much so that you can see ripples everywhere of late-bound languages slowly being replaced by static languages (not that the transition is complete). Even the one true bastion of late binders, web development, is seeing massively increasing adoption of languages like TypeScript on the frontend, and languages like Go on the backend (see adoption at YouTube, Dropbox and so on).

Outside of web development and trivial admin scripts, the other major source of late-bound software was Apple's Objective-C. Which is getting replaced by Swift, a language that heavily favors static typing and functional paradigms.

> There's a million Rails and Python apps out there, so basically this "opinion" of yours is not bound to reality.

There's a million trivial CRUD apps that don't do much of worth and whose death the world would not really mind. DHH, Rails's author, didn't mind restarting his Basecamp servers 400 times a day because of a memory leak. These people are not software engineers. They're cave men using glue other people made to tie together rocks to build stone walls, which then fall as soon as the weather stops being nice. Security, reliability, performance: what do they know about any of these things? But hey, you can do cute things like 3.days.from_now, what a great framework!


You're all over the place here and your arguments make no sense whatsoever when examined.

First, you conflate mass appeal with some sort of objective "better" criterion, which is of course bonkers. To use one of your own examples against you, there are hundreds of thousands of Java monkeys out there using glue other people made to tie together rocks to build stone walls. Which do fail as soon as the weather stops being nice. Security (you should look into Java deserialization bugs), reliability, performance: what do they know about any of these things?

Second, you conflate late-binding as present in Lisp and Smalltalk with late-binding present in other dynamic languages. The two are not equivalent, a perfect example of the whole is greater than the sum of its parts.

Lisp and Smalltalk will never become popular (read my previous comment), but that does not mean that they do not sit on an apex and still have a lot to give. To anyone interested in the "craft of programming", "the Art", there is nothing better period. Here are some references for you, from the masters themselves:

[1] https://www.infoq.com/presentations/We-Really-Dont-Know-How-...

[2] https://www.youtube.com/watch?v=YyIQKBzIuBY

[3] https://www.youtube.com/watch?v=FvmTSpJU-Xc


There's tons of R and Python code in scientific computing that's not being replaced by static languages. Anyway, dynamic languages have been around since the 60s. This debate is very old. Trends in one direction or another swing back and forth. If you're going to mention Go, Swift or Rust, what about Elixir or Julia? They're new languages, too.


We get it, you're a static typing fan. This debate is as old as the hills, but late-bound dynamic languages are not going anywhere, are not being replaced, and will continue to be popular and make money, because what you don't seem to get is that we don't all agree static typing is the golden hammer you think it is. Yes, those people are software engineers, despite your unwarranted superiority. And Rust... please. In time, maybe, but right now it's hardly used and cannot remotely be called a popular language, not in comparison with the popular dynamic languages, which dwarf it in usage by orders of magnitude.


We need only enough static analysis to reliably go from machine language on a silicon CPU to robust late binding. After/above that, fuck it.



