It gives very short shrift to commercial source control systems generally. Two (VSS, and SCCS --- the latter shipped as part of the original AT&T Unix distributions, on the same closed-source terms) are mentioned in passing, but neither is treated as a milestone.
That would include BitKeeper --- copies were available for a few years at no cost to Linux kernel developers, but only on increasingly restrictive terms, which McVoy ultimately wound up revoking altogether. IIRC, Linus released the first embryonic version of git within weeks after McVoy withdrew the free-as-in-beer version of BitKeeper.
In terms of historical interest, it's also worth noting why McVoy restricted and eventually withdrew the BitKeeper license: Andrew Tridgell (of Samba and rsync) had begun reverse-engineering the protocol with the eventual intent of implementing a free client.
The resulting debacle was fairly ridiculous: Linus chastising Andrew Tridgell, continued flamewars about relying on a closed-source product.
What I find interesting here, though, is just how much hot water McVoy landed in. He gave away free licenses to Linux developers, then when someone in that community started reverse engineering his product with the intent to replace it, he revoked the free license, leading Linus to develop a replacement anyway -- one that has since consumed the vast majority of BitKeeper's target market.
Tridge's reverse engineering was why McVoy withdrew the license, but he'd started to restrict it well before that. Initial releases were source-available, and had a "fail-safe" that would turn the code fully open source if McVoy's company ceased to function. Source access and the fail-safe were withdrawn over time, and various restrictive clauses were added. (Most notably, around 2002, McVoy added a "non-compete" clause which purported to restrict any user of the gratis BK from working on a competing SCM for a full year after they last touched BK. I'm not sure that was ever tested in court.) Here's a brief description of the history:
Andrew Tridgell recreated the "reverse engineering" you mentioned in front of a live audience at linux.conf.au not long afterward; see https://lwn.net/Articles/132938/ . Summary: he saw a port number in the standard bitkeeper URLs, tried telnetting to it, tried typing "help" which listed the available commands, and tried typing the "clone" command which spit SCCS files back at him. He walked through the process and literally had the audience shouting the appropriate commands at him the whole way through, demonstrating the obviousness of the process.
Well, if I recall correctly, his reverse "engineering" consisted of figuring out that the protocol was a plain-text, English-based protocol (i.e., it would be like reverse engineering FTP by looking at the TCP stream).
Misses the absolutely massive ClearCase. In 1999 (and earlier, but I ran into it in 1999 at Loudcloud), if you wanted to support multiple branches and allow merging code into them, it was the only tool that made it easy. It had a great (Windows) client-side environment that gave everyone a "view" into the source, but _man_ was the backend ugly.
ClearCase is certainly popular, but I don't think it ever had a feature that was a big milestone in version control history. Client side software with a view of the source is already covered in the author's point #4, and ClearCase doesn't do this better than the others (quite the contrary, in my experience).
ClearCase had two new additions: branching/merging made easy(ier?), and the source code under version control presented as a file system, a drive letter on your Windows system.
It also completely ignores earlier distributed source control projects like GNU Arch. 2005 was merely when an open source distributed VCS arrived that was fast and pleasant to use; implementations of the idea are older than that.
This article isn't meant to be a comprehensive list of all SCM systems or even all of the important ones. It's just a list of all of the big technological advancements. Perforce is a good system but it was never revolutionary.
If you ignore all the distributed "stuff", and the workflow enhancements it permits, and assume everybody is always connected to the server via a LAN, you can concentrate on doing a reasonably good job of handling very large quantities of data, including very large binary files. (Apologies for not trying to reproduce the breathless style of the headline.) As is common, the article presupposes that decentralization is unambiguously progress, but that isn't true in all respects.
People often complain about the idea of using version control for large binary files, as if it were unreasonable to want such a thing: as if, as a point of principle, version control systems should contain only text files, and the fact that many version control systems support binaries poorly were proof that you don't want them anyway. But there are actually people who create, with their own hands, large binary files, often of the completely unmergeable variety, and they deserve version control just as much as the programmers do.
(And then once you have a system that works well for them, you can then use it to solve all manner of problems that might previously have involved storing files in public folders, mailing them round, or maybe just waiting for them to compile again. No need for any of that crap any more - just check the files in, they're there forever, and you can get them back quickly.)
> As is common, the article presupposes that decentralization is unambiguously progress, but that isn't true in all respects.
It is, in the sense that a decentralized VCS is, essentially, a superset of a centralized one.
…
Blobs are certainly still an issue, though orthogonal to distribution (I don't think you intended to imply it was related, but it could be read as if you did).
Well, I don't mean to imply that binary files are inherently impossible to handle using a decentralized system. In fact, I have some PDFs and PNGs in my git repository, and git has managed not to make a mess of them. But I still think binary files are difficult for distributed systems to support well.
Distributed systems rely on allowing people to (in effect) create multiple versions of the same file, and then merge them all together later. But it's very rare that binary files are mergeable! And if the file can't be merged, the distributed approach won't work. People will step on one another's changes by accident, and people will have to redo work.
The usual solution is simply not to allow multiple versions to exist: enforce some kind of locking system, so that each editor has to commit their changes before the next one can have a go. But now you need some centralized place to store the locking information...
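The locking idea above can be sketched in a few lines. This is a hypothetical illustration, not code from any real VCS: it models the "centralized place" as a shared lock directory, and the names (`acquire_lock`, `release_lock`, `LOCK_DIR`) are made up for the example. The key point is that lock acquisition must be atomic, so exactly one editor can win.

```python
# Minimal sketch of exclusive locking for unmergeable files, assuming a
# shared location (here, a local directory standing in for the server).
import os

LOCK_DIR = "locks"  # stand-in for the central server's lock store

def _lock_path(path: str) -> str:
    return os.path.join(LOCK_DIR, path.replace(os.sep, "_") + ".lock")

def acquire_lock(path: str, owner: str) -> bool:
    """Try to take the exclusive lock on `path`; False if someone holds it."""
    os.makedirs(LOCK_DIR, exist_ok=True)
    try:
        # O_CREAT | O_EXCL makes creation atomic: exactly one client wins,
        # even if several try at once.
        fd = os.open(_lock_path(path), os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.write(fd, owner.encode())
    os.close(fd)
    return True

def release_lock(path: str) -> None:
    """Release the lock so the next editor can have a go."""
    os.remove(_lock_path(path))

# First editor gets the lock; a second attempt fails until it's released.
assert acquire_lock("art/logo.psd", "alice")
assert not acquire_lock("art/logo.psd", "bob")
release_lock("art/logo.psd")
assert acquire_lock("art/logo.psd", "bob")
```

The point of the sketch is the tension described above: the atomic create only works because everyone agrees on one `LOCK_DIR`, which is exactly the centralized component a fully distributed system lacks.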
I never had a problem using cvs for what I considered large binaries. Certainly we kept using cvs long after mostly switching to bk because cvs worked better for binaries. Perforce never seemed like a big deal, just cvs with a little more.
Due to CVS's (or RCS's?) text-based file format and conversion of line endings when checking out a repository, migrating binary files across operating systems can cause mangling of bytes whose value equals that of CR and LF.
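The hazard is easy to demonstrate. A minimal sketch in plain Python (not actual CVS code): applying a naive Unix-to-Windows newline conversion to a binary blob corrupts any byte that happens to equal LF (0x0A).

```python
# A few bytes that might open a binary file; 0x0A is a real data byte here,
# not a line ending.
original = bytes([0x89, 0x50, 0x0A, 0x0D, 0x42])

# What a text-mode checkout effectively does on Windows: LF -> CRLF.
mangled = original.replace(b"\n", b"\r\n")

assert mangled != original
assert mangled == bytes([0x89, 0x50, 0x0D, 0x0A, 0x0D, 0x42])
```

The reverse conversion (CRLF back to LF) is just as destructive, since any accidental 0x0D 0x0A pair collapses to a single byte; this is why CVS requires marking such files as binary (with `-kb`) to suppress conversion entirely.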