Today I managed to boot OpenGenera in a Linux VM. It took me a couple of weeks, but it finally works. I intend to have a look myself at what all the fuss is about.
If you're interested, here are some pointers to relevant resources (snap4.tar.gz image works as advertised):
The opengenera tar-ball can be found in the intertubes.
Tip: Use Ubuntu 7.04 x86-64 to save yourself some trouble. I don't know what the most recent distro that works is, but 10.10 doesn't: you get a blank window because recent X servers are missing something.
Before proceeding with the installation, point the Ubuntu image's apt sources at archive servers that still carry the old packages, so you can apt-get with ease.
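Concretely, since 7.04 (Feisty) is long past end-of-life, its packages have moved off the normal mirrors to old-releases.ubuntu.com. A sketch of the fixup (the mirror hostnames in your image may differ; adjust the sed patterns to match):

```shell
# Point apt at the old-releases archive, where EOL Ubuntu packages live.
# The -i.bak flag keeps the original as sources.list.bak.
sed -i.bak \
    -e 's|//[a-z.]*archive.ubuntu.com|//old-releases.ubuntu.com|g' \
    -e 's|//security.ubuntu.com|//old-releases.ubuntu.com|g' \
    /etc/apt/sources.list
apt-get update
```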
It would be great if someone (anonymously) could get a working OpenGenera VM and upload the image so that people could skip all these steps and try it out immediately.
Who is the legal owner? I never heard what the final resolution of the probate process was.
EDITED to add:
Oh, I guess David Schmidt owns all of it now [0]: Symbolics is currently a privately held company which acquired the assets and intellectual property of the old public company called Symbolics, Inc.
According to a YouTube comment made 8 months ago by Kalman Reti, the 'last Symbolics developer', the Symbolics IP is owned by John Mallery:
"The problem is that the Symbolics IP is now owned by John Mallery; he has stated he has plans for making it available but so far (several years) has not yet done so. Until he does so, only two groups (that I am aware of) have access through prior contracts with the previous owner of the IP, namely MIT (and anyone officially associated with it) and customers of the Symbolics maintenance business (and independent company) run by David K. Schmidt."
I bought my MacIvory from David Schmidt about 4 years ago. At that time, it was my understanding that David only owned the maintenance contracts from Symbolics as well as the ability to sell Symbolics hardware. The intellectual property for Genera was owned by John Mallery (http://www.csail.mit.edu/user/926)
What's the fuss all about? A lot, but please bear in mind when this was developed. We are now in 2014; we are talking about something roughly 30 years old. A fair comparison would be with an Apple ][.
It certainly should not be compared with an Apple ][: the last Symbolics machine, the NXP1000, was introduced in 1992. Symbolics was still adding features and such at that time, when the Apple ][ was long obsolete.
There are some enlightening comments from Kent Pitman on comp.lang.lisp, which give a sense of what kinds of features the Lisp machines provided to developers, and why they still haven't been matched by contemporary systems:
Yes - I'm aware of what this is about. From the point of view of a developer, all the demos show something that is missing in today's tools.
Comparing it to the Apple ][ is a bit of a stretch. Lisp machines were developer machines, machines for professionals in computing technology. We don't have that today. For example, what is the equivalent package or distro that transforms your PC into a "developer machine"? You don't have anything like that. All you can do is stitch together some emacs/vi/gdb setup and search blogs when you're stuck. And beyond that, you aren't able to go in and fix IDE bugs _live_.
Today we're missing a machine for the professional programmer.
I think multimedia computing, as introduced by the Amiga, did more for AI, at least in terms of games, than the Lisp Machines did. I do think the jack-of-all-trades approach is useful for sharing data across computers and professions.
It's sad to see the demise of the professional multimedia computer to tablets and tablet-laptops.
I own a Symbolics Lisp Machine (a MacIvory). I'm in San Francisco and would be happy to show it to anybody who's interested and in the San Francisco Bay Area.
Be warned, I'm still learning how to use Genera, so we'd be learning how to use the system together.
The real ergonomics of the LispM were due to the wonderful Microswitch keyboards, which have never been equalled since. To use one was a kind of revelation into the apotheosis of keyboard use.
(The old, original Tom Knight AI Lab keyboards were even slightly more wonderful; the later Microswitch LispM keyboards never quite equalled the originals in terms of tactilely satisfying feel.)
(For a long while after leaving MIT (ran the MIT-EECS LispM and DEC-20 machines) I used one of the custom-made Lawrence Livermore run of Microswitch keyboards, driving a custom 68K board which turned the up/down events into standard RS-232 for use with standard CRT terminals. I think I still have that keyboard somewhere in the now-long-abandoned kids' play junk.)
Those were nice keyswitches, but the modern Cherry MX is also excellent. I would be hard pressed to pick a favorite between the two of them. The Cherries have a lighter touch and a little "snap", both of which I like.
It was not just the hardware, or the software, or the culture, or the interesting problems you could solve, or the zeitgeist of that time in history, but a rich combination of all those things and more, that is so hard to capture, describe or reproduce -- or even believe, if you haven't experienced it first hand.
Those giant keyboards, with all their wide special purpose buttons topped with hieroglyphic keycap labels, in combination with the huge screen, three button mouse, and of course all the great software turning on and off the little dots on the screen that you could dive into, explore and modify at will, the printed and online documentation, the networked developer support community, all carefully designed to work together seamlessly regardless of cost, gave you the feeling of being in control of a very heavy, expensive, well built, solid, powerful, luxury automobile, with rich Corinthian leather seats, a high fidelity sound system, cruise and climate control, power steering, windows and seats, a full tank of gas and an empty ashtray, a glove compartment stuffed with AAA maps, about to embark on a long and adventurous journey exploring far away places you've never visited before.
It's hard to capture that kind of multi-sensory ergonometric computational Fahrvergnügen, just running it in an emulator on a MacBook Pro. (As luxurious and well designed as the MBP is, it can't hold a candle to a Lisp Machine.)
Sitting down in front of one of those monsters with a big mug of coffee, an interesting task to work on, a rack of my favorite music on cassette tapes, many hours of free time to hack in the privacy of an air conditioned lab in front of me, and a bean bag chair for naps when I pass out from exhaustion behind me, will always be one of my cherished memories.
Ricardo Montalban couldn't have expressed the luxurious feeling of power and control more eloquently: "I have much more in this small Chrysler than great comfort at a most pleasant price. I have great confidence, for which there can be no price. In Cordova, I have what I need."
https://www.youtube.com/watch?v=Vsg97bxuJnc
> It was completely programmable with all source code. It was also not for 'playing' around - for that it was too expensive. As a software developer you could focus on your task and the whole operating system was supporting you. There was no piece of software that was not accessible in a few mouse clicks. Everything could be inspected, everything was up for modification. Software was live and dynamic. Not dead and static like today. There was no boundary between software development and software usage.
This reminds me of Smalltalk environments like Pharo. I recently realised that there are so many things in common between Lisp and Smalltalk environments (by Lisp environments I mean what we have today, like SLIME+Emacs or LightTable). I also think that LightTable has huge potential as a successor to the Lisp machines and to _really_ integrated development environments.
Pharo runs on top of something, running on top of something else, ...
Genera on the Lisp Machine runs on the metal. It is the process scheduler, it does handle the bus interrupts, it receives the network packets, it writes the bytes to the disk controller, it sets the bits in the graphics card, it writes the sound bytes to audio interface, the network packets are Lisp arrays, ...
Yes, but don't forget the microcode level that runs the Lisp code. Some very small bits of the OS were written in microcode (as few as possible), IIRC. (It's been a long time since I was hanging around the AI Lab. ;-)
The processor instruction set was implemented in Microcode. It was optimized to run compiled Lisp code. There was also a Lisp to Microcode compiler, though I haven't used it.
The microarchitecture of the MIT/LMI/TI machines wasn't particularly complicated; it had pipelined 3-address load-store instructions just like the first RISC CPUs. Apart from the functions to handle each Lisp instruction, the microcode was basically a simple RTOS. In my view, the clever part of the design was in picking the minimal features needed from the microcode to allow everything else to be written in Lisp.
There was a proposal from Sun for a Java OS that seemed to me to copy the same split.
It is a pity that all the Ivory papers that were in journals still seem to be behind paywalls.
The things that stood out to me were: a) boy, it was a slow system. Sure, vi etc. are more spartan, but even back then I'd imagine they were significantly faster than this Lisp machine. b) The mouse was clearly the hot new thing; it is used a lot more than what I think would be optimal.
I suppose I meant that they probably were a bit ahead of their time and overly ambitious. Performance, and especially latency, is critically important for interactive applications. The designers must have been fully aware of the performance characteristics and hardware limitations at the time, but still they decided to ship such a system. Maybe they should have taken a look in the mirror, noticed that the hardware was not ready for what they were making, downscaled it to fit, then incrementally grown those features back as they became more practical.
The goal was not to have a fast vi, for writing applications in a slow way. If you wanted vi, there were other systems.
It was a system for research and development of advanced software, often with complex GUIs. Symbolics sold for several years into the CAD and 3D-graphics markets.
Precisely! When Lisp Machine programmers look at a screen dump, they see a lot more going on behind the scenes than meets the eye.
I'll attempt to explain the deep implications of what the article said about "Everything on the screen is an object, mouse-sensitive and reusable":
There's a legendary story about Gyro hacking away on a Lisp Machine, when he accidentally trashed the function cell of an important primitive like AREF (or something like that -- I can't remember the details -- do you, Scott? Or does Devon just make this stuff up? ;), and that totally crashed the operating system.
It dumped him into a "cold load stream" where he could poke around at the memory image, so he clamored around the display list, a graph of live objects (currently in suspended animation) behind the windows on the screen, and found an instance where the original value of the function pointer had been printed out in hex (which of course was a numeric object that let you click up a menu to change its presentation, etc).
He grabbed the value of the function pointer out of that numeric object, poked it back into the function cell where it belonged, pressed the "Please proceed, Governor" button, and was immediately back up and running where he left off before the crash, like nothing had ever happened!
Here's another example of someone pulling themselves back up by their bootstraps without actually cold rebooting, thanks to the real time help of the networked Lisp Machine user community:
I'd love to play with a Lisp Machine. I live in London, UK, so sadly I can't see any way of using one (either a physical device or a VM). There are pirate copies, but I don't want to go that route.
What lisp machine articles often miss is contrast to other related projects.
Complete introspectable systems: How does the experience compare with using Pharo Smalltalk today? Sure, it doesn't provide a kernel, but it's a pretty complete* system that's very reflective and open to modification.
Running a lisp userland: There are Common Lisp replacements for Emacs, CL window managers, and one or two Lisp Machine style GUI libraries (CLIM). However, most Lispers seem to be happy using other WMs and Emacs. Do the CL applications miss something that the Lisp Machine environment provided, or were the alternatives more compelling somehow?
Other Lisp Machines: The MIT CADR is open source and available online[1]. Lisp Machine articles seem to focus on Symbolics software; what is it that the CADR lacks? rms allegedly reimplemented many Symbolics features on the MIT Lisp machines.
I'm often struck by how many Lisp Machine features have been implemented on other systems (e.g. CLIM, versioned file systems) yet haven't gained many users. There must be stories here.
*: Of the developers I've met, Emacs hackers seem to live in Emacs more than Smalltalkers in their image. For example, there are multiple Emacs twitter packages, but I've not seen any applications (only libraries) for tweeting from inside a Smalltalk image. I'm not sure what this says about the respective environments.
> How does the experience compare with using Pharo Smalltalk today?
Genera is a full operating system running directly on Lisp Machine hardware. A CPU which runs a stack machine. The network code goes down to the packets and the Ethernet card driver.
> There are Common Lisp replacements for Emacs, CL window managers, and one or two Lisp Machine style GUI libraries (CLIM).
A few thousand developer years is the difference. Sophistication. Polishing. Applications.
> rms allegedly reimplemented many Symbolics features on MIT Lisp machines.
He didn't. He tried to help LMI, but stopped soon after. Symbolics, LMI and then TI developed a lot more software. Much more.
> I'm often struck by how many Lisp Machine features have been implemented on other systems (e.g. CLIM, versioned file systems) yet haven't gained many users. There must be stories here.
Versioned file systems existed before and after Lisp Machines. For example DEC's VMS had a versioned file system. CLIM was the attempt to develop a portable standardized version of the GUI library of Symbolics. It failed to gain real traction: too different, not very polished, everybody had already a different GUI toolkit, ...
Here is an example of using a high-end Lisp machine application for 3d graphics:
> I'd love to play with a Lisp Machine. I live in London, UK, so sadly I can't see any way of using one
Well, you're in luck! Peter Paine of Abstract Science has (as of 2010) a substantial inventory of Symbolics hardware in deepest, darkest Kent: http://www.asl.dsl.pipex.com/symbolics/
From the website: "Location: 50 miles E.S.E. London UK (1/2 hour from continental port Dover and Channel Tunnel train at Ashford). Served by mainline train (65 minutes from central London) and M2 motorway."
There is also an emulator for the TI Explorer [1] that maybe gives more of the feel of using the development environment.
I still hope to be able to run the final version of the LMI software environment on the CADR emulator but haven't had much time to work on it recently.
Well, the 8-bits, Lisa, Macintosh, and Windows would have still been with us. So, I guess AT&T Sys V and BSD would have not happened which gets rid of Sun and makes one wonder what Dr. Tanenbaum and Mr. Torvalds would have worked on. I guess the second vector would have been what Steve Jobs would have built after leaving Apple (NeXTSTEP, the ultimate user friendly LISP machine?!?). Maybe Dylan would have worked in its infix form.
I would imagine that intelligent people would have worked at getting the same problems solved in a different form.
[edit] Instruction Sets from Intel, IBM, MIPS, Motorola would have been really different. Personally, I wanted the Forth machines to rule :)
Actually they were. The Interface Builder was sold also for the TI Lisp Machine - the TI MicroExplorer running in a Mac. Expertelligence developed the access to the Macintosh Toolbox for TI and also ported the Interface Builder to the MicroExplorer. Thus you could develop Mac-style user interfaces on the TI Microexplorer, including using the Interface Builder, which ran on the Lisp Machine and talked to the Mac for the UI display/interaction.
Here is a video which shows it running on a TI Micro Explorer, a Nubus Board in a Mac II.
I personally think it would have proceeded at a slower pace, and would have been more expensive.
The tradeoffs may or may not have been worth it, but there seemed to be an obsession with doing it Right rather than doing it cheap and quick. Even today it would be financially beneficial to buy a $70k workstation over a $2k workstation for a developer if there was a significant boost to productivity; each developer costs you well over $70k per year, and you can't double your productivity by hiring twice as many developers.
It takes a lot of attention to detail to make software like the lisp machines. Most free software projects stop at "Good Enough" so we end up with e.g. slime.
It is a bit more nuanced than that. The ARPANet had a lot of DEC machines; USC and USC-ISI had a number of DEC-10s on the network. Usenet, however, was primarily a bunch of UNIX (or UNIX-like) machines; they traded in their modems for nifty connections to the Internet and carried a lot of the same protocols with them. Many of the Internet protocols were developed in college labs with the ability to 'plug in' different UNIX configurations (the BSD sockets stuff), and since the code was well tested on UNIX, UNIX was often the operating system on the machine that connected someone to the emerging Internet at the time. Lisp systems were popular for research, but were not used extensively for networking research. I expect it was in part a compatibility issue with BSD's socket stuff (where a lot of research was happening).
A more interesting alternate history for me would be: if the folks doing systems research had settled on a functional language like Haskell rather than a procedural one like C, what would our APIs look like today? I say that because I suspect that using C as the language of choice was more influential than the OS of choice.
Lisp Machines were usually networked machines. There were two branches:
* The Interlisp-D machines from Xerox were smaller machines, networked with a server + printer. PARC developed networked collaborative software for them; the first remote GUI was written for Interlisp, the first IMAP client was written for Interlisp-D, ...
* the MIT Lisp Machines were developed with CHAOS, an Ethernet-based protocol. TCP/IP was available early. Many of the TCP/IP protocols were implemented for them (often client and server): chat, terminal, mail, RPC, NFS, X11, DNS, HTTP, remote printing, remote booting, ... They also ran DECNET and some IBM network protocol. Actually the networking code from Symbolics was quite sophisticated in some respects...
BBN (the Internet company at that time) also used a lot of Lisp Machines. They had some distributed remote object networking substrate, which also ran on Lisp Machines.
Lucent had developed an ATM network switch whose first versions were based on multiple embedded Lisp Machine boards for the switching code and another Lisp Machine for control and administration. They ran all kinds of fancy networking code on them in a zero-downtime fashion with live software upgrades. They were intended, for example, as large network and telephony switches.
One small correction: the D machines weren't all smaller; there was the Dandelion (later sold as the Star), the Dolphin and the Dorado (which was a full rack, the size of a 32-bit CADR machine, and built of ECL logic). There were at least three different standard microcoded environments you could boot with, Interlisp-D, Smalltalk and Mesa. When I was at PARC if I worked late I could connect to a Dorado, but most of the time I used a Dolphin (plus my group had some Dandelions which ran some custom microcode). They could also run Alto microcode by the way.
I preferred the MIT lispm environment because I grew up with it. For a while I had a job where I had both a Dandelion and a 3600 in my office; later I had two 36xx machines, one with a color display.
They were pretty fast for their time. The later, hyper-dynamic window system was probably too heavyweight for its time, especially when later translated over X, but I generally used the simpler base window system because it was faster.
It was the most productive (in terms of amount of useful code generated per unit time) system I have ever used and I still miss it. The Interlisp D, though quite different to use (and in some ways better), is a close second.
Right, but the large ones weren't sold, IIRC. Xerox sold the smaller ones, not the Dorado. That was also why some users upgraded to the Lisp Machines from LMI, Symbolics and TI: they had larger address spaces and could run larger software.
> Lisp systems were popular for research, but were not used extensively for networking research. I expect it was in part a compatibility issue with BSD's socket stuff
They were not used for networking research, AFAIK, but Lisp Machines were definitely very well networked, speaking many protocols which soon came to include IP and TCP (though they started out with MIT's CHAOSnet). I was using Lisp Machines (both Symbolics and TI) networked with Sun 3s in the mid-to-late 1980s, and they definitely interoperated.
But I think that was a few years later than the networking research you're referring to, that created the Internet protocols.
The folks doing systems research weren't about to settle for something like Haskell, precisely because it's way too hard to do systems work in it. C is the way it is for a reason.
And a whole lot was done on PDP-10s, which were in many ways the first Lisp Machines. DEC asked various groups including AI researchers what they needed when designing the PDP-6, which was essentially the PDP-10 prototype, and its 36-bit word size (common back then, because that was enough to encode 10 decimal digits, the pre-computer standard for scientific calculations) was matched with an 18-bit word-addressed address space, so one word could be a natural CONS cell, and there were useful instructions for that.
In the fullness of time, long after 1963, 18 bits of 36 bit words totaling 1 MiB of 9 bit bytes was crippling, but that much memory was at the time unimaginable. MIT's proposal some time later to have a full address space of memory built was a big thing, some said it couldn't be done.
Which gets into one big difference between Xerox PARC Altos and their software and UNIX(TM) for the first decade or so: they were seriously constrained by memory. Altos were 16 bit machines, also word addressed, so a total of 128 KiB, although they had a bank switching feature. The bigger PDP-11s had a split Instruction and Data (I&D) feature so that a program could have 64 KiB of code, and due to the MMU, 58 KiB of data and 8 KiB of stack, also for a total of 128 KiB, although you could get fancy with overlays as BSD 2.x did to support TCP/IP.
I believe this resulted in significant differences in software and system design, e.g. part of MIT's going for The Right Thing was using systems with large (for the time) address spaces. Whereas the smaller systems PARC and UNIX systems required more compromises, although occasionally that had good results, e.g. UNIX pipelines and the conventions that developed from them.
Lisp Machines would have become cheap. Similar software divergence would happen. There would be an increased separation from normal people and programmers. Lisp would become a serious thing used by all programmers.
Or maybe another thing like UNIX would've replaced Lisp Machines quickly.
Security was literally nonexistent. If you were at the console, you had control of the machine. Pwning one over the network was probably not difficult, though it wasn't the kind of thing people spent much time on back then.
Is that a bad thing? I sometimes wonder how much we lose in developer collaboration due to security layers. I can imagine running a cool multiuser image in a prototyping language.
If security is important, you lose something in terms of getting shit done. It's up to you or your organization what the tradeoff is. Fun fact: At one company I worked at, it was a matter of security policy that every machine in the company run the Bit9 binary whitelisting software. Every. Single. One. Including all the dev machines. In addition, it was decided that valuable corporate assets such as source control be airgapped from the outside network, making it difficult to run a dev machine that could access both the repo and the Web where all your documentation is. But Security was determined to be Priority One at that company, so productivity had to take the hit. If you develop the tech ten times slower and with ten times greater annoyance to your engineers, that was a worthy price to pay for not having it stolen by scary foreign spear phishers.
But given the level of expertise which would have supposedly been their target market, wouldn't this have been an ideal environment for security analysts to develop within?
If you buy into the sales pitch, highly trained teams of developers would be properly outfitted with workstations well-suited to roll their own crypto, for development purposes, and then easily test against a wide array of complex scenarios.
I don't see the relevance. I, too, can easily find dead-end hardware that was expensive years ago that no one wants today. Most are just happy someone is taking that crap out of their hands. It's big, heavy, has strange power requirements, people forgot how to even turn it on and get it running, etc.
Current hardware wouldn't contemplate supporting only a single, oddball (even then) language.
(Although the C spec is carefully written to allow C-on-Lisp - e.g. by making it illegal to compare pointers not pointing to the same array, allowing a C-"object"-per-Lisp-object implementation - real C code tends to make assumptions.)
It was a problem, as I recall, but not nearly as big a problem as the fact that people stopped buying the machines :-)
Actually, it turned out -- though I never managed to communicate this to potential users -- Zeta-C and Symbolics C served different use cases. Symbolics C was best for porting entire C programs; Zeta-C was best if you had a library written in C and wanted to call it from Lisp.
I use Inferno to develop software, as a virtual OS over top of Linux or Windows. Its community doesn't see any value in writing software just to make it easy to use for newbies. It is still actively maintained, with new changes from Plan 9 development. There are even some new software tools developed in it (eg, I wrote a build tool). Maybe this qualifies?
I tried web.archive.org/save/http://<url>, but I got 'Access Forbidden for URI http://lispm.de/symbolics-lisp-machine-ergonomics'. The site itself loads fine in both firefox and chromium, but doesn't work with wget/curl. Probably some protection against robots / automated downloaders.
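If it is User-Agent filtering (a guess based on the symptoms above), sending a browser-like User-Agent string often gets past it:

```shell
# Fetch with a browser-style User-Agent; the UA string here is arbitrary,
# just something the server won't flag as a robot.
curl -A "Mozilla/5.0 (X11; Linux x86_64)" \
     -o symbolics-lisp-machine-ergonomics.html \
     http://lispm.de/symbolics-lisp-machine-ergonomics
```

(`wget --user-agent="..."` is the equivalent for wget.)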
"It was also grounded in the belief that software development is best done by small groups of very good software engineers working in a networked environment."
Wow, what a quote. Now can we repeatedly beat that into management's head.
I used this keyboard, as well as the ones for MIT-AI's graphics terminals from which it was inspired, and a 2nd generation of those (long story), at the same time I was using "standard" keyboards including the IBM PC's, and if you have the ability to ignore keys you don't need, you can focus on the home keys and Control and Meta (and ideally use the key left of A as rubout/backspace; it not being Caps Lock is not really a problem for coders).
The big issue is that the IBM layout is Ctrl-Alt, whereas the Lisp Machine was [too many other "bucky bits"]-Meta-Ctrl. Since I learned EMACS with the latter layout, for many years I just remapped those keys on other systems.
If you're interested, here are some pointers to relevant resources (the snap4.tar.gz image works as advertised):
http://www.advogato.org/person/johnw/diary/12.html
http://www.cliki.net/VLM_on_Linux
http://libarynth.org/vlm_on_linux