I've always considered the Debian model of support (freeze the world and try to support everything) to be wrong and swimming against the tide. I believe that the only ones capable of supporting a package of a given complexity are the authors themselves, and the distros should just handle packaging (and minimal patching on top if necessary). If the author drops support, then you should either be extremely confident you have an adequately strong team to fork and maintain the package...or just don't.
In my humble opinion, the FreeBSD ports model is better in that regard. That's also why I try to use pkgsrc for various packages when maintaining systems running LTS Linux distros.
The reason for the "freeze version and backport security fixes only" is that users need to be absolutely confident that when they ask for security updates, nothing else will mysteriously break.
That's certainly a risk, but it's less likely to happen than the alternative and all serious distributions have a QA process in place which is designed to catch those issues.
One reason authors can't support packages is that their views may not match Debian's, and this is why there are external maintainers. E.g. we have an open source game engine with a launcher to download mods, and Debian's privacy policies don't allow it to check for mod updates by default.
There are also authors who don't care to support the ancient library versions present in all distributions, or who simply don't care about anything except static linking. Some people also don't use Linux at all, even if their project supports Linux.
We have reasons to keep it enabled by default, so the package maintainer will need to disable it anyway. Personally I'd like to eventually provide a better option for everyone, but that launcher is so unimportant that it's unlikely I'll have time for it in the foreseeable future. So I think the Debian community provides a good option for the people who need it.
It's of course not the case for our project, but I suppose there are developers who simply don't care about things like privacy or security as much as the Debian community does, and I see no reason why they should spend their time on that instead of actual development.
You could make it a build-time choice and prevent building if the choice isn't made. Then Debian could choose to maintain user privacy and you could choose to violate user privacy.
This is more or less what is going to happen in the future, as we have no problem merging such options. The real problem still remains, though: Debian needs its own packages because we simply have different views on how the software should behave by default.
This is a real problem. It is a total joke to suggest that people move important systems to Archlinux though. Basically as an administrator you need to be aware of security advisories against the OS you run. This is already true because you need to know when to upgrade packages. If you are running Debian you should sign up for debian-security-announce:
There are equivalent mailing lists for CentOS. It would probably be helpful to have better tooling to warn about or mark packages with known associated vulnerabilities.
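For Debian at least, some of that tooling already exists: the `debsecan` package cross-references what's installed on a host against Debian's security tracker. A command sketch (suite name and flags as I remember them; verify against the man page before relying on it):

```shell
# debsecan lists CVEs affecting the packages installed on this host,
# based on Debian's security tracker. Assumes `apt-get install debsecan`.
debsecan --suite jessie --only-fixed       # issues with a fix already available
debsecan --suite jessie --format packages  # just the affected package names
```

Running it from cron and diffing against the previous output gets you most of the way to the "warn about vulnerable packages" tooling described above.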
I'm not an expert on Debian's internal process, but sometimes it seems that Debian adds packages to their distribution without a clear plan for how to fix security issues in them. Sometimes upstream maintainers seem hellbent on making it impossible to offer long term support for software. Elasticsearch is a case in point:
I've been looking at various descriptions of distro security policies lately and I'm actually not really decided on what the "right way" is. I agree that if you're a sole operator, you need to monitor the upstream project, disclosure lists, and assigned CVEs. If you're in a larger company, you probably need a team to handle that.
On the one hand, this looks like it makes the issue of distro choice moot. If you keep on top of announcements, you can either upgrade packages, or backport the patch yourself, or just repackage the latest version. But having the right distribution makes things simpler - you get the benefit of someone backporting the patch earlier and doing a coordinated release. Maybe you even get a pre-disclosure notification. One way or another, it's simpler to rely on someone else's work, even if it's not guaranteed to always be available and complete.
On the other hand, with a rolling distro like Arch you get new versions by default. This means your backport is going to be much easier if you have to do it yourself: for example, you only have to go 2.3.4->2.3.5, rather than backport from 2.3.5 to 1.2.3 because that's what's in stable. Arch/Gentoo also had the benefit of shipping grsec kernels, which considerably limits the scope of kernel exploits (and userland ones too, to some extent), though Debian joined them lately.
I guess if I had to admin a small service on 5 or so servers full time these days, I could be tempted to actually go with some rolling release distro. At larger scale I'd prefer the benefit of using the upstream work. At a not-full-time job I'd still go with Debian/Ubuntu/CentOS to rely on the automation of common cases.
About Arch itself - nothing much. It's just in the same bag as Gentoo regarding release process. Rolling rather than periodic release means that whenever upstream project releases a new package, it's either automatically monitored or someone clicks "out of date" on the package site (like https://www.archlinux.org/packages/core/x86_64/bash/). Then you get the new version in hours or days, rather than when the next OS release comes out.
Taking GCC for example, that means Arch users are at 5.3.0 now, while Debian is still on 4.9.2.
For security patches though that means that on Arch you have to upgrade to the latest version, while Debian will likely give you the same version you're already running with the issue fixed. This may be annoying on Arch if you're patching PHP for example - it may not work anymore with the site you're running and you have to migrate to the latest version.
> It's just in the same bag as Gentoo regarding release process.
There's a big difference, though - Arch Linux has no concept of partial upgrades. You have to update, or things break. Gentoo allows you to stick with old versions and has a pretty sophisticated mechanism for that ("slots"). It's also complicated, which is one of the reasons Arch Linux is easier to use.
This makes Arch Linux unsuitable for pretty much any "production" scenario. Puppet version updated to 4? Tough luck if you didn't scramble and update all of your code since version 4 has backwards-incompatible changes.
Works pretty well for desktops - if something breaks, you can usually just fix it. But it does not scale.
Gentoo, on the other hand, has an unstable and a "stable" channel and you can use it for production systems if you've got the necessary manpower to essentially build your own distro.
I'm running Arch Linux in production for my private systems, and am currently extending it to a system previously on Debian oldstable in order to harmonize my processes.
It works very well for me (I didn't have any update-induced downtimes yet, except for the usual reboots to apply kernel upgrades), but I can easily see how people could struggle with the no-partial-upgrade problem when they employ complex, fast-moving software stacks.
My solution is to package that software myself in a private repo, so that I get to decide when to upgrade. In Arch, probably more than with any other binary distribution, it helps very much to have a good grip on the packaging process. Once you already have your private repo and signing key set up and rolled out everywhere, the barrier to rolling the next custom package is much lower.
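For anyone curious, the mechanics of a private Arch repo are pleasantly small. A sketch (the repo name `myrepo`, paths, and URL are made up):

```shell
# Build the package from its PKGBUILD, then add it to a private repo
# database that pacman can consume:
makepkg --clean          # produces e.g. mypkg-1.0-1-x86_64.pkg.tar.xz
repo-add /srv/myrepo/myrepo.db.tar.gz mypkg-1.0-1-x86_64.pkg.tar.xz

# In /etc/pacman.conf, list the private repo *before* core/extra;
# pacman takes a package from the first repo that carries it, which
# is what effectively pins your custom version:
#   [myrepo]
#   SigLevel = Required
#   Server = https://pkgs.example.com/myrepo
```

The repo-ordering trick is the closest thing Arch has to version pinning, which is why having the private repo set up in advance matters so much.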
That's in testing though. Arch has 5.3.0 in core already. Not sure where the .1 comes from since it's not even released yet according to https://gcc.gnu.org/releases.html
Edit: Actually it looks like the versions are just named differently. Debian uses SVN version from 20151205, while Arch uses 20160209. So Arch is more fresh (by 2 months), but uses different naming convention.
Arch is based on "rolling" releases where they very quickly promote whatever the original author has released into their distro. I think there are some checks but it is quite a light process. You are a lot closer to the upstream developers of the software you use, but you have to take on the responsibility for stability.
A lot of the Debian process is trying to avoid releasing packages with (known) bugs
Some rolling-release distros get updates every 6 hours; are you sure you want to potentially upgrade that often?
If I were running a larger company I would assign my security team to help out a distro security team, so I benefit from the work of others as well as getting specific support for the software I really care about.
You don't have to upgrade every 6 hours. The upstream repo may be constantly updated, but the smaller your scope is, the less you need to care about really. For example I expect that I'll have some new packages on my desktop every few hours, but on a server with system+service+monitoring it may be one package a week instead. At that pace, it's trivial to just get notifications and classify updates as critical/useful/whatever.
If you have a proper test/deploy pipeline you can retest with new software all as it's released, but only push to production every week, or when it's critical.
Lazyweb question. Is there a way for me to easily keep track of important upstream security announcements for some given set of software—say, ElasticSearch, Rails, nginx, and Linux? Like I could check those check boxes and get an RSS feed, or something?
We're working on something similar at Patchwork (https://patchworksecurity.com/). You tell us what packages are installed and we notify you when a new security update is released. Right now we're focusing on distro package managers, but plan to add support for monitoring upstreams like nginx.
The big issue we see with RSS / mailing lists is a problem of discoverability and noise. Not all software has an easy to digest format for security changes. We want to do the scraping / parsing once and make it consumable by others. General lists such as OSS and distro specific security announce lists tend to have more noise. Most people only care about packages installed on their machines, which is why we filter our results based on your set of packages.
Related to open source, you probably want to track OSS Security mailing list [1]; or if you're part of an organization relying on open source software, you should have someone looking at that list.
I think it is a good complement to the security advisories of your Linux distributor (Debian, CentOS, whatever), but it isn't free, as you'll need time to keep an eye on it.
Thanks. Regarding cost... someone should sell curated security feeds to startups.
There must be many dozens of engineers who monitor mailing lists for issues in (let's say) ElasticSearch. They could be incentivized to share their findings.
Then if one source tags a certain CVE as significant, I get a ping. If several sources tag it, the ping gets more urgent. Eventually I feel scared enough that I upgrade.
That isn't really possible; Linux, for example, doesn't even have a security issues page. You are expected to read git patches yourself and understand the implications of each patch.
In general, looking at the security information for a Linux/BSD distro containing the software you care about is probably the best way, along with tracking any RSS feeds you can find that do exist.
Except on Arch everyone is supposed to keep updating their system at least once a day/on boot, making security lists irrelevant as long as the packages are patched and released to the repos.
This is one of the reasons you'll see most Arch users and developers recommend against using Arch for a server: the increased maintenance burden, which on a desktop is minimal.
I wouldn't say "once a day". When I'm sick, I might not touch my PCs for a week, and that has always been fine.
The only situation where things were fucked up badly was when the system upgrade failed on a friend's notebook that was not updated for 6 months. I had to uninstall KDE in order to untangle the missing dependencies, then do the upgrade, then reinstall KDE.
For us lazy admins, there's cron-apt in Debian-based distributions. I believe something like cron-apt is available in other distributions as well.
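For reference, a minimal cron-apt setup could look like this (the key names match cron-apt's shipped examples, but treat the values as assumptions to adapt):

```
# /etc/cron-apt/config
MAILTO="root"
MAILON="upgrade"   # mail only when an upgrade was actually found

# Note: out of the box cron-apt only *downloads* upgrades (the -d flag
# in /etc/cron-apt/action.d/3-download); drop the -d there if you really
# want unattended installs, at your own risk.
```

The download-only default is deliberate: you get notified of pending security updates without the risk of an unattended upgrade breaking a service overnight.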
What you call "redneck" version is actually how I manage my system configuration. :)
The metapackage is the only one that's explicitly installed, ever. (Except for workstations where I might install applications to test them.) Part of system maintenance is to clean out orphaned packages every once in a while, usually as part of the system update.
> But can you really be sure that people that do this stuff as an hobby can deliver this in the quality that you expect and require? Let’s be honest here, probably not.
This must be the most short-sighted description of Debian developers I have seen, because 1) most do packaging as part of their paid job and 2) apparently millions of people in the world believe that, yes, these people do a fine job at maintaining those packages. Not to mention those who develop the software they package or those who do academic research about software packaging and dependency resolution. But insulting Debian has become commonplace nowadays.
The author also seriously recommends Arch Linux and Tumbleweed as alternatives. Anyone who has ever maintained a server in an enterprise production environment knows how valuable the kind of stability that the author criticizes is. Not everyone lives in a DevOps environment where you can "move fast and break things".
He does have a point about unmaintained packages, though. Debian probably shouldn't package things if they cannot support it over the lifetime of the release. Why would I ever want to use a two-years-old Wordpress version? I never understood why those kinds of applications are even packaged at all.
Most sysadmins deal with this by including a third-party repository for applications like Elasticsearch (and many others), often run by the authors themselves, which takes care of security and version updates. This eliminates most of the concerns. You get a rock-solid operating system and an up-to-date application on top of it. Web applications like Wordpress or Owncloud aren't usually deployed using distro packages at all.
Countering one of your points, and yet bolstering your statement here. Even in a DevOps environment, packaging is important. Same as a system image or random artifact, a distribution package can provide consistency and repeatability.
If I had to wait for a rebuild of the entire operating system to test deployment of a single application, how much confidence could I possibly gain in that application's deployment process?
Also... it seems the author is suggesting that newer software always has fewer bugs. Hmm...
I also think this really shows the disconnect in mentality between the younger developers and the 'older' ones (I quote this because while I am 25 I often find myself in the latter category).
Software used to be a thing you maintained, if you made a major release you would continue to support the old one for a period until users (in the case of libraries, developers are your users) had a chance to migrate off of it. This meant releasing security updates and bugfixes for the old release in parallel to the new one - or at least making it not impossible for these changes to be backported.
Now everyone wants to be evergreen - you're still using 1.2.x of my library that has glaring security issues that have been fixed in 2.x? Well, better upgrade, because I'm only supporting the newest release.
As someone who has to constantly deal with the headache that is NuGet for their day-job, I absolutely loathe this mentality and the idea that everyone should just use their language package manager. If any of the NuGet packages we use internally has some glaring security issue, I've got 20 applications to update, test and redeploy on a good day if there's no breaking changes - at least if for whatever reason the Fedora packagers can't backport a security fix for a python package I depend on it's easy enough to push a new version out in my local yum repository and have it automatically fixed for every application that uses it.
It's gotten too onerous to support old software. There used to be one Linux, one BSD, and a couple of Windows / DOS. Mac was probably around too, I just never interacted with it in the 90s / early 2000s.
Backporting security fixes and replicating interconnected bugs across all the combinations of setups is impossible. Just getting the latest version fixed in all is onerous.
Keep current is my philosophy. It's why I like the Ruby community, the global namespace makes it impossible for libraries to stay behind too long.
Even then, who cares. Software developers didn't have to explicitly support BSD, if there was a BSD user who cared enough they'd work on getting your package in the ports tree. If there was a Debian or Red Hat user that cared enough they'd make a .deb or .rpm. None of this has changed, what HAS changed is library and application developers deciding to say "screw people using existing releases, I'm only putting bugfixes and security updates in the newest release so anyone still using 1.x and caring about stability can sod off".
I package software I care about, but libraries in particular are notorious for not maintaining any semblance of ABI or API compatibility even between minor releases these days. It's increasingly difficult to support this when I have to rebuild or update multiple applications that depend on a library because they decided to make huge breaking changes in a point release or they won't provide any bugfixes for the major release I'm still on.
Yeah, but everything is online right now and we're moving more and more data that _must_ be secured. Back then you really only cared about a couple of platforms - nobody cared if you could hack some random dude's garage or phone.
Nobody is asking you to test your software on a million different platforms, just don't abandon your 1.x users the moment you release 2.x. I am not talking about explicitly supporting XYZ distribution, because package maintainers and software developers are two different groups of people.
Well, you have to choose something. You can't have security, automated updates to the latest version, and stability all in the same place. You're either in the Debian/RHEL world with old kernels, old libraries and old userspace tools which miss a lot of fresh features, or in the npm/pip/curl|bash world, where you have the latest version of everything all the time.
In the former case you can do `apt-get upgrade`/`yum update` and be almost sure that everything will continue working, but - no - you can't have PHP 7.
In the latter case you either use npm shrinkwrap-like tools to install the exact same version of everything every single time, or play Russian roulette with new dependency versions. And - just in case you didn't notice - when you pin some package to a specific version you no longer receive security upgrades for it. And let's be honest - you have lots of those "^1.0.1", "~0.10.29", "^0.3.1" things in your package.json/Berksfile/... And for almost any package "^0.3.1" is the same as "0.3.1", because the next version will obviously be "1.0.0" and 0.3.X won't be receiving any more updates.
It's obvious that no single distribution will be able to package the insanely large amount of packages from all the different sources, let alone backporting patches. So you either limit yourself to only the stuff available in your distribution, or you're on your own with updates (including security ones).
As for packages updating themselves, sometimes it's a good thing, sometimes it isn't. I bet a Wordpress installation which can't overwrite itself (because it is owned by root), and doesn't allow executing user-uploaded .php files, will be much more secure than one which has full access to itself.
P.S. no amount of tooling can solve this problem. If you're using version X of package A, then you find out that there is a security vulnerability in version X which is fixed in version Y and version Y is not fully compatible with version X (changed an API, config file, anything else in a backwards-incompatible way), you're semi-screwed. You will have to handle that situation manually.
I personally find that Debian stable for most packages, and then vendor/Packager repos for specific software (eg percona, varnish, dotdeb, nodesource, etc) works very well.
The best way to solve that is probably creating small, task specific distributions.
If you compile, for example, a distro with just the core and web server packages of Debian, it'll be small enough to check for updates that break it, so versions can roll much more often.
This seems to be less a complaint about packages and more a debate over rolling vs. non-rolling distros.
The author seems to take the position that rolling is always better for security because he selected a few "web" packages and found vulnerabilities in the shipped versions.
Of course, people running mission-critical enterprise applications are probably not running phpMyAdmin on that server, so they couldn't care less whether the repo for their stable version of Debian contains that older software.
I agree it is a problem, but the solution to that problem cannot simply be rolling all the time.
I use Arch on my Desktop system, but I would never run my employers mission critical database on Arch... I need stability and security, not simply security.
Ultimately I think we need a better way to define the Core OS, vs "User Applications"
Things like glibc, the kernel, etc. are clearly Core OS; things like phpMyAdmin are User Applications. Where it gets grey is databases, webservers, etc. Do you label them a Core product or a User App? If I was running a mission-critical application, I would not want my MariaDB system just upgraded to the latest version, with new features possibly breaking (or removing) older features that my app may still need.
Enterprises move slowly. I still have enterprise applications that require Java 6, and some things that still have software that only runs on Windows 2000. The idea that rolling to the latest version of software all the time is a workable plan highlights the author's ignorance of how enterprises actually work.
The FreeBSD folks have made a pretty good separation between "base" (what you call Core OS) and "ports" (what you call user applications). It works wonderfully.
This isn't a new observation, and I'm kind of surprised it hasn't made it into mainline linux packaging philosophy yet.
My knowledge of FreeBSD is limited, but from what I understand this still does not resolve the issue.
While they have separated the base OS, they have not resolved the issue of applications that can be mission critical and thus warrant being frozen like the base OS is. Things like Apache, MySQL, etc.
If you treat them like user apps then they will be updated when maybe I did not want them to be, but if you treat them like the base OS then you have the problem the author outlines.
Further, my understanding is that the ports system only supports the latest version of FreeBSD, meaning if you want to use ports you always have to update to the latest version when it's released.
I have CentOS and RHEL servers running version 5; if it were FreeBSD I would not be able to use this two-release-old version of FreeBSD with ports, I would have to upgrade.
That's well beyond the remit of the OS developers who are only responsible for FreeBSD, but the ports team has happily provided us with `pkg lock` to prevent edits to ports once they are installed.
>> my understanding is the ports system only supports the lastest version of FreeBSD
I believe the ports tree will be compiled for a given branch until the underlying branch is EOL. FreeBSD 8, for example, had binary packages built from 2009(?) until the branch was EOL'd last year. That's six years of support for 3rd party software, after which you can maintain/build your own pkg repository.
I understand the desire for longer environment lifecycles in FreeBSD (I hate the continuous upgrade cycle myself), but without a company stepping up and providing long term commercial support, I'm not seeing how a volunteer project as small as FreeBSD can do more for longer.
>>That's well beyond the remit of the OS developers who are only responsible for FreeBSD,
That is exactly the point the author is making by declaring that "packages are not secure". They are not maintained by the OS, but are distributed as "official", thus giving some users the illusion that they are supported or "in the remit" of the OS developers.
Enterprises also tend to buy products with well defined supported platforms. It's fine to say "Debian version x". Who's going to advertise their product as "supported with Arch, whatever updates are going" ?
> Did you know that for example Wordpress comes with automatic updates
I grant that Debian is not great at webapps. OTOH, giving an application write access to itself is inherently risky. It's too bad shared webhosting and its limitations have so warped the PHP application community.
An alternative model: web app has r-x on the app files, and an app specific admin user has rwx to run the check and update script on a regular basis.
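That model needs nothing exotic, just ownership and mode bits. A minimal sketch of the permission side (user names and paths are hypothetical; shown here as an unprivileged demo under /tmp rather than creating real users):

```shell
# Demo of the model above: app files are read-only to the web server,
# writable only by a separate admin user that runs the update script.
# In a real deployment the directory would be owned by e.g. "appadmin"
# while php-fpm runs as "www-data"; here we just demonstrate the modes.
APP=/tmp/demo-webapp
mkdir -p "$APP"
printf '%s\n' '<?php // application code' > "$APP/index.php"
chmod 755 "$APP"              # web server user: r-x on the directory
chmod 644 "$APP/index.php"    # web server user: r-- on the files
stat -c '%a' "$APP/index.php" # prints 644
```

With the app user unable to write its own code, a compromised request handler can't persist a backdoor into the application files, while the admin user's cron job can still apply updates.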
WP has the ability to use (S)FTP to update the files rather than direct file access.
That said, it's intended for the long-tail of sites. If you know how to run an update script, you're already more advanced than a fairly big chunk of the WP userbase, so you probably should be doing updates that way. Many professionally developed sites will be using source control, so autoupdates are disabled anyway.
I think that labeling point release distribution package managers as insecure because they are sub-optimal for a certain use case (updating web applications) seems a tad too excessive.
Having software go through a decent distribution's packaging process not only provides stability guarantees, but also off-loads many tasks that you would otherwise have to perform on a per-project/per-developer basis.
The author makes some important points, but there is a cruel irony: He's a main developer of owncloud, which in terms of security updates is a huge problem.
Owncloud has its own update mechanism, which unfortunately usually doesn't work in the real world (it breaks if you have any reasonable timeout for your PHP applications, which every normal webhoster has). There are likely countless owncloud installations with known security issues that their users tried to update, but couldn't. (The alternative is a manual update on the command line, but given the target audience of owncloud it's safe to assume that many of its users aren't capable of doing that.)
It is true that in the stable branch there are dated version numbers, probably most of the time for a good reason (e.g. long term support).
On the other side I adhere to Slackware's vanilla philosophy and with slackbuilds you can have always fresh and up to date software.
Well, if you do this you should have a dev, test, prod chain for your servers; otherwise you update at your own risk.
For my last upgrade I went from Ubuntu 12/Node 0.10 to Ubuntu 14/Node 4, but nginx also changed, and even logrotate going from 3.7.8 to 3.8.7 introduced a few modifications that broke my configuration files.
Upgrade is not that easy.
I would like to share a project of mine that brings the vanilla philosophy to every distro. You have a script to build your software and install it locally, so for example you have version z.y.x installed on your system and you want to install z.y.(x+1), released yesterday.
Normally you download the tarball, etc., and launch make install; most of the time you follow really similar steps, so you can put them in a script and launch
.software_install Foo
if the version is the same as in the README.md or even
.software_install Foo 1.2.3
to install a specific version. It is really easy to add new software to the list. You can also package your software to avoid compile time on other hosts (test and prod). Give it a try, I think it can be useful to many system administrators and developers:
The author brought evidence -- CVEs that aren't getting fixed in Debian repos. What's your counter-evidence? Letting software keep up with upstream is as insecure as upstream, no more or less. What's the evidence that pinning upstream to some older version and backporting some (not all) security patches and bug fixes is more secure?
> The author brought evidence -- CVEs that aren't getting fixed in Debian repos. What's your counter-evidence?
Just poking around for two minutes finds counter-evidence. Let's take the two critical Wordpress vulnerabilities. He uses an archive.org page of the Wordpress package from February 7. If he actually looked in the Changelog of the Wordpress package for e.g. Jessie, he would have seen that these issues were fixed the day before in stable and two days before in sid:
So, I guess it depends on your definition of "aren't getting fixed". But the way the author writes it, it seems that issues linger for weeks/months, which does not seem to be true.
The archive.org links were used at the creation of the blog post. (which was not necessarily the release date ;-) - needed to find some time to write it)
So Wordpress is an interesting example. Because the CVE assignment date has nothing to do with the release date of the patches. Wordpress doesn't request the CVE on their own.
So we're still at a 4-5 day delay (https://wordpress.org/news/2016/02/wordpress-4-4-2-security-...) for security fixes for web-facing software. This is still far worse than just enabling automatic updates in Wordpress. I wouldn't have much of a problem with that if it were a locally exploitable vuln, but web software is usually exploitable via the web.
When it comes to web software I believe it's unacceptable to add any additional delay. (sure those bugs were not that severe, but as other examples in the blog show the problem with delayed or never updated packages is inherent)
> This is still far worse than just enabling automatic updates in Wordpress.
A webapp updating itself, having write access to its files (I see zero privilege separation in [1]) and getting updates from a hopefully not compromised source (see Linux Mint lately), that's just asking for trouble. I trust the Debian mirror infrastructure with signed packages that are updated by a privileged system user way more.
> The archive.org links were used at the creation of the blog post.
I still find it disingenuous to use an archive.org link dated February 7 on February 13.
> So we're still at a 4-5 day delay for security fixes for a web-facing software.
I agree that this is (far) too long. I just dislike the sensational tone of your blog post and subtle bending of facts. Linking to the Wordpress 4.4.2 release page and the Debian changelog would have been factual and convincing. Pointing to a week-old webpage, which was outdated on even the original date is just sloppy or manipulative.
Debian isn't the only distribution that exists. Sometimes you need a distribution to backport patches (read: enterprise customers that fear new features like the plague).
I think Debian's long release cycles don't make much sense in this day and age. To me, a rolling release model makes much more sense, especially in this world where security updates are being done constantly, and are generally focused on the more modern branches.
For enterprises where software is part of the core business, keeping up with updates on a regular basis is probably better than doing large upgrades every once in a while (for one thing, tracking down regressions is a lot easier when you don't have to search the entire haystack).
By using release designations instead of toy codenames ("testing" instead of "Stretch") in apt config files I get pretty close to a rolling release. Using that plus apt pinning, I am totally happy running a stable/testing/unstable/experimental system with the majority of packages from testing on my desktop. I am rather conservative and would avoid anything bleeding-edge on production systems, so there I would choose stable with security updates (and stable-backports if needed). YMMV though.
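For reference, a minimal sketch of what such a setup can look like (mirror URL and priorities here are illustrative, not a recommendation):

```
# /etc/apt/sources.list -- suite names, not codenames
deb http://deb.debian.org/debian testing main
deb http://deb.debian.org/debian unstable main

# /etc/apt/preferences -- prefer testing, allow explicit pulls from unstable
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 300
```

With priorities like these, everything tracks testing by default, and `apt-get install -t unstable somepackage` pulls a single package from unstable when needed.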
Just want to emphasize (and this is not directed specifically at you): you almost certainly shouldn't run Debian testing on anything that is public-facing. Packages migrate to testing only after spending some days in unstable without high-priority bugs being filed against them.
If a security patch is uploaded to unstable today, you won't get it in testing for a few days, and possibly many more if the migration gets blocked.
The conclusion I take from this is that distros need to be a lot more selective in what they package. If packagers can't reliably backport security fixes for the several years that a distro release is supported, they shouldn't create that expectation by putting the package in.
OpenBSD purged many webapps from its ports tree one or two releases ago for this reason (too many security problems to keep up with). Most such packages didn't do much more than extract .php files to somewhere in /var/www anyway. Users of such apps are now expected to do this themselves and keep track of updates.
A bit meta but articles that use the passive voice such as "Blah considered insecure" annoy me.
Considered insecure by whom? The author? Then why not just say "Distribution packages are insecure"? Sure, the passive voice makes it sound like there is some consensus here, but that does not actually seem to be the case.
Well, I can agree with the concerns. But what are the alternatives? What GNU/Linux distributions do (and Debian in particular) looks to me like the least of all possible evils.
One thing to note is that RHEL comes with far fewer packages than Debian (e.g. Wordpress and phpmyadmin don't seem to be in the main repository), so you may have to rely on slower third-party repositories.
When you actually have a subscription for RHEL, you can also get email notifications whenever security fixes, bug fixes, and/or feature enhancements are released for packages that are installed on your servers. There is some configuration you can do to pick what events you get notified for (like bug fixes only).
And if you find yourself managing an enterprise environment, Red Hat Satellite (basically on-premise RHN + promotion/environment/configuration management) tracks what errata are applied to or required by each system and can report on what the gaps are.
I work for Red Hat, and making it easy to support Linux is definitely our bread and butter.
And if you're like me and too cheap to pay for a Red Hat subscription, the CentOS-announce mailing list also announces all the Red Hat errata - you should absolutely be subscribed to this list if you run CentOS in production.
I agree that there's a distinction to be made between core/base and user-applications/ports (as mentioned elsewhere in this thread)...
Ultimately it's all software, and the distinction is fuzzy: e.g. the kernel won't easily break backwards compatibility, while databases, interpreters, etc. will... but it's not something you can easily measure without being vigilant about every change.
I think an important distinction (at least for Ubuntu) is the one between the main and universe repositories: I'd expect these problems to happen in universe and to be mostly absent from main.
From this point of view, a good choice would be to rely completely on main, and to weigh pros and cons when deciding whether to use the repositories to manage the user applications/libraries/dependencies for your actual service. (I'd probably define all of them with Nix, but that's not a panacea.)
The problem is that even main isn't guaranteed to keep up with all the security updates: in some cases updates aren't prepared and shipped because the default configuration is not vulnerable (but obviously a sysadmin could change that) and it's not worth the effort... or, just like in the Python 2.7.9 case: a security fix can most often be applied as a standalone patch, but if the changes are overarching and not easily distilled into a patch, the update becomes too expensive/risky and won't be done.
Oh yes, this is a big problem. I still run into things like installing the ownCloud client on Ubuntu and then wondering why it doesn't work (it's years old). Or recently I found out that Python3-Pandas on the Raspberry Pi is version 0.14.something, which has annoyances that have long since been solved. If you install Drush (Drupal Shell) from Ubuntu 14.04, you get version 5.10! (They are at 8.1.)
Arch Linux is already much better, but it breaks things occasionally, and it was kicked off of Digital Ocean, sadly.
I really hope Ubuntu Snappy packages, which are essentially the same as the Arch User Repository but more secure if I understand correctly, will solve this mess.
My understanding was that snappy packages can be descriptions that point to, e.g., GitHub to get the latest versions.
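If that understanding is right, a snapcraft.yaml part can indeed point straight at an upstream Git repository, roughly like this (a sketch of the format as I understand it; the name, summary, and repo URL are made up):

```yaml
name: mytool
version: git
summary: Example tool built from upstream
description: Built directly from the upstream repository rather than a distro archive.

parts:
  mytool:
    plugin: autotools
    source: https://github.com/example/mytool.git
    source-branch: master
```

Here `version: git` and `source:` tell snapcraft to fetch and build from the repository itself, so the package tracks upstream instead of a frozen distro snapshot.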
Hmm, never heard of Ubuntu rolling, and it's not offered at the VPS providers I know. But I will check it out!
The fact that we haven't solved this yet is puzzling. In some ways we have actually gone backwards over the last decade, with languages implementing several broken packaging systems that compete against each other. Not even to mention the horrible practice of piping random sh1t from the web into a shell with curl/wget.
It would be more or less trivial to implement a wget wrapper that downloads the .sig and validates it. Hardly anyone these days understands or cares, which seems to be the bigger issue (not a technical problem but a people problem, as Gerald M. Weinberg would say).
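A minimal sketch of that wrapper idea in Python (using a published SHA-256 checksum file instead of a GPG .sig, purely to keep the example self-contained; a real wrapper would shell out to `gpg --verify`, and all names here are made up):

```python
import hashlib
import urllib.request
from pathlib import Path

def verify_checksum(artifact: Path, checksum_file: Path) -> bool:
    """Compare the artifact's SHA-256 digest against the published checksum.

    Accepts the common 'HEXDIGEST  filename' format produced by sha256sum.
    """
    expected = checksum_file.read_text().split()[0].strip().lower()
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return actual == expected

def fetch_verified(url: str, dest: Path) -> Path:
    """Download url and url + '.sha256', deleting the artifact on mismatch."""
    urllib.request.urlretrieve(url, dest)
    checksum = dest.with_name(dest.name + ".sha256")
    urllib.request.urlretrieve(url + ".sha256", checksum)
    if not verify_checksum(dest, checksum):
        dest.unlink()
        raise RuntimeError(f"checksum mismatch for {url}")
    return dest
```

The point is just how little code the verification step takes; the hard part, as noted, is getting people to run it at all.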
The job of the distributions would be much easier if more software projects would a) consistently use semantic versioning (MAJOR.MINOR.PATCH, see http://semver.org for details) and b) explicitly and officially designate when they no longer support a given MAJOR.MINOR branch.
The latter should be a signal for the distribution to upgrade to a newer and supported upstream version instead of (halfheartedly) trying to support the software themselves.
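Under that scheme, a distro tool could check the signal mechanically. A toy sketch (plain MAJOR.MINOR.PATCH strings only; real version strings with pre-release tags need the full semver grammar, and the function names are made up):

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse a plain MAJOR.MINOR.PATCH string into a comparable tuple."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def branch_still_supported(installed: str, oldest_supported: str) -> bool:
    """True if the installed MAJOR.MINOR branch is at or above the oldest
    branch upstream still declares supported -- i.e. security backports can
    be expected; False means the distro should move to a newer upstream."""
    return parse_semver(installed)[:2] >= parse_semver(oldest_supported)[:2]
```

For example, if upstream declares 1.4 the oldest supported branch, a distro shipping 1.3.9 would get a clear "upgrade, don't backport" signal from `branch_still_supported("1.3.9", "1.4.0")` being False.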
Freezing the world does not work, particularly with certain packages. For example, I still see lots of usage of old versions of OpenSSL (0.9.8) and OpenSSH.
Even just making sure to cover these two packages and their dependencies/affected applications would go a long way toward covering attack vectors (obviously not comprehensive).
In my humble opinion, the FreeBSD ports model is better in that regard. That's also why I try to use pkgsrc for various packages when maintaining systems running LTS Linux distros.