Red Hat reports security issue in Linux Kernel which was fixed 17 months prior (openwall.com)
92 points by artistsvoid on June 23, 2020 | 74 comments


Your headline's misleading. Red Hat knows it's long been fixed in the LTS kernels, but it still needs patching in the RHEL kernels. Generally speaking, current LTS kernels are not used in older Linux distros.


Moderators can of course change the title; I don't think it's misleading, and I wasn't trying to be. I guess 'reports' could be changed to 'finds', and it could be mentioned that it was found 'in its Linux kernel', though I'm not sure when the character limit would kick in.

'current LTS kernels are not used in older Linux distros' - not sure what your point is with this. They absolutely could have used / can use LTS kernels if they wanted to, or they could merge LTS-kernel changes, which they apparently don't.


"Red Hat finds security issue in Linux Kernel that had been fixed upstream for 17 months"


More like "Red Hat assigned a CVE to a security issue reported 17 months ago":

https://bugzilla.redhat.com/show_bug.cgi?id=1708775

I don't see what the big news here is, it's merely identifying a vulnerability with a proper CVE ID, end of story. The issue itself does not even have a serious security impact so it will likely never be fixed in RHEL.


> Generally speaking, current LTS kernels are not used in older Linux distros

Debian always uses longterm branches. Sometimes Ubuntu, too.


Can someone in the kernel dev space give a longer explanation for this?

This looks to me like Red Hat assigning a CVE to something patched a long time ago. Is this just record keeping to label security issues with CVEs or has Red Hat left this unpatched for 17 months? (Or something else?)


Red Hat's security program includes disclosing and tracking security vulnerabilities found in its products. As part of doing this, they obtain CVEs for any vulnerabilities which did not previously have them. CVE IDs are required by US gov't and many corporate security programs, as the broader point of the CVE system is not only to provide standardized identification of vulnerabilities but also assign certain metadata (such as a severity category) which is used by many vulnerability management programs. Basically, having a CVE ID assigned makes it much easier for the vulnerability to be included in automated tools.

Because Red Hat builds on open-source software and generally lags well behind the cutting edge, usually by the time RH identifies a problem in something like the Linux kernel it has already been found, reported, and assigned a CVE ID by someone else (unless it's discovered by RH's own kernel developers, in which case they would of course be working on a fix at the same time that they start the vulnerability process).

In this somewhat unusual case, the problem was found and corrected in the Linux kernel through a typical bug-fix process and not handled as a security vulnerability, so no CVE was assigned. When RH discovered the problem they made a determination that it should be handled as a security vulnerability, and so that triggered their normal process of disclosing and requesting that a CVE ID be assigned - in this case, even though the problem had already been fixed. It's not especially significant to this that RH had not backported the fix for this particular problem, as RH would generally go through a vulnerability process anyway because of the simple fact that some RH customers have not updated their systems, and so the vulnerability needs to be reported to them.

Overall the whole thing is not particularly interesting, really just a bit of security bookkeeping, except that it does expose that RH does not necessarily integrate LTS Linux releases, which means that they will sometimes miss security fixes included in such releases. Most of the time such security fixes would be treated as vulnerabilities by the organization that made them and by the broader Linux community, which would trigger RH's internal vulnerability management process, leading to the fix being rolled out to RH products. However, many Linux bugs with potential security implications are not judged by the people finding and fixing them to be severe enough to merit a security vulnerability process... for instance, virtually every memory handling problem could be viewed as a vulnerability, but requesting a CVE ID etc. is wasted effort if the problem is unlikely to be exploitable. Whether a problem is exploitable can be hard to tell. Ultimately this is sometimes a judgment call and different organizations will disagree, sometimes leading to situations like this where a downstream user chooses to treat it as a vulnerability after the fact.


> In this somewhat unusual case, the problem was found and corrected in the Linux kernel through a typical bug-fix process and not handled as a security vulnerability, so no CVE was assigned.

This isn't unusual. This is actually the usual case.

The unusual thing here is actually that someone downstream noticed they were missing a fix (probably because its LTP regression test was failing, which is also unusual because most kernel fixes don't have an LTP regression test).

More commonly, no one notices and these bugs never get fixed in downstream kernels that aren't staying up to date with LTS; and there is never a CVE, an oss-security post, a Hacker News thread, etc. But these bugs are still there.


> More commonly, no one notices and these bugs never get fixed in downstream kernels that aren't staying up to-date with LTS

Actually a lot of RHEL subsystems are updated wholesale by including all upstream patches (not just those that go into LTS kernels), and this way all such fixes are automatically included.


Great explanation.


Sounds like the latter. They "found" it and requested a CVE, but it was already fixed -- they just hadn't merged that change because they didn't notice it was security-related.


Why don't Red Hat et al use the LTS kernels?


They feel like they know better and do not want all of the fixes that the LTS kernels provide for some crazy reason.

I suggest you contact them if you rely on a RHEL kernel to ask them why they do this, it's always seemed crazy.

Note, I'm the person who does the LTS kernel releases, maybe they just don't like me :)


I'm certain that Red Hat and our kernel developers have no animosity towards you; in fact, it's completely the opposite. You're well known in the community, not just for this but for maintaining and writing countless drivers and loads of other great work in the kernel.

Fedora does use the LTS kernels.


> They feel like they know better and do not want all of the fixes that the LTS kernels provide for some crazy reason.

It's even crazier; they sometimes backport changes to their kernel that the LTS kernels don't get. We use a custom kernel module that contains a bunch of #if/#endif blocks that check the kernel version for stuff that changed. That doesn't work on Red Hat, since in some places you actually need the branch meant for more recent kernels.
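
For illustration, here's a minimal sketch of the kind of gating such a module ends up with. LINUX_VERSION_CODE and KERNEL_VERSION come from <linux/version.h>, and RHEL kernels additionally define RHEL_RELEASE_CODE / RHEL_RELEASE_VERSION in their headers, which is one way to special-case their out-of-band backports. The upstream cutoff shown is the real 4.15 timer callback change; the RHEL cutoff is invented for the example:

    /* Hypothetical excerpt from an out-of-tree module. */
    #include <linux/version.h>
    #include <linux/timer.h>

    /* Decide which timer API this kernel has. The 4.15 cutoff is the
     * real upstream change; the RHEL cutoff is made up here, since RHEL
     * backports land at release numbers unrelated to the upstream
     * version, which is exactly the problem described above. */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 15, 0)
    #define DEMO_HAVE_TIMER_SETUP 1
    #elif defined(RHEL_RELEASE_CODE)
    #if RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 7)
    #define DEMO_HAVE_TIMER_SETUP 1
    #endif
    #endif

    #ifdef DEMO_HAVE_TIMER_SETUP
    /* New-style callback: the timer passes itself to the handler. */
    static void demo_timeout(struct timer_list *t) { /* ... */ }
    #else
    /* Old-style callback: an opaque unsigned long argument. */
    static void demo_timeout(unsigned long data) { /* ... */ }
    #endif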


Isn't the whole autoconf approach supposed to avoid the need for #if macro soup by using feature detection?


There would be cleaner ways to achieve this, maybe not specifically autoconf since I think that's more tailored towards "normal" (user space) stuff.

Macros are convenient to quickly check the version in your code without adding another layer of tooling... Until you end up with said macro soup of course.

It's actually a legacy module we're about to phase out for 5.x so don't worry too much. The new and shiny replacement will probably use git branches for whenever something changes.
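
For what it's worth, the feature-detection route the grandparent asked about does exist for kernel modules: projects such as ZFS on Linux compile tiny probe files against the target kernel's headers at build time and derive HAVE_* defines from whether they compile, rather than comparing version numbers. A hedged sketch of such a probe (the DEMO_HAVE_TIMER_SETUP name is invented; the timer API change it probes for is the real 4.15 one):

    /* conftest-style probe: the module's build system tries to *compile*
     * (not run) this file against the target kernel's headers. Success
     * means the new timer_setup() API exists, so the build defines
     * -DDEMO_HAVE_TIMER_SETUP and the module uses the new callback type.
     * Vendor backports (like Red Hat's) are then detected automatically,
     * with no version-number guessing. */
    #include <linux/timer.h>

    static void probe_cb(struct timer_list *t) { }

    void probe(void)
    {
            struct timer_list tl;

            timer_setup(&tl, probe_cb, 0);
    }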


It could also be motivated by the fact that it's not entirely wise to base an entire multibillion dollar business around whatever text a non-employee happens to push to a repo you don't control.


Android doesn't seem to mind, they require the LTS updates to be taken for their devices (well, "require" is a strong word, they are pushing harder now than they were in the past, "required" will be happening in the future, hopefully...)

As the number of systems running RHEL is really just a rounding error compared to the number of Android systems out there, maybe it doesn't really matter :)


Android and RHEL are completely different scenarios.

Android's target, which is who Google has to deal with, is multiple manufacturers creating kernels for custom hardware (often without upstream drivers), with very short product lifetimes, relatively little experience with upstream contribution, and few needs for new features within a given major release of Android.

RHEL is developed by a single company with a 10-year life cycle and only 3-4 kernel versions to juggle, with almost non-overlapping lifecycles (as far as the initial development-heavy phase is concerned). Development occurs upstream first, and quite a few engineers are upstream developers or maintainers, so the number of non-upstream features is very small and shrinking over time; see for example the secure boot lockdown patches that Matthew Garrett started when he was at Red Hat. And even though the product is not the kernel, we need to backport more features than what goes into LTS, because userspace needs them (user namespaces, driver updates, networking or virtualization optimizations, enablement for new processors, etc.)

So it's only natural that there are completely different trade-offs to make.


I would guess that RHEL has more stringent requirements than your average me-too Android manufacturer.


But do the devices running Android generally operate with the same requirements as the servers running RHEL?


> But do the devices running Android generally operate with the same requirements as the servers running RHEL?

I guess it depends on the device and the business.

How many days of downtime on Amazon.com is the cost of bricking 1M phones? Or 100M?


Yes, that's why all the servers in the datacenter are running Android, which has a stellar security record.

This sort of myopia is yet another excellent reason for Red Hat to take their kernel process in-house.


LTS's security record hardly needs proving. The example we're commenting on in this thread is only one occurrence of a highly recurring problem.

RHEL fixes only CVEs. Linus Torvalds considers that there is no such thing as a security bug, or rather that every bug is a security bug. So RHEL's kernel can't be secure.


Debian also uses LTS-series kernels for their stable release, with their own patches on top. However, they don't actively backport features like RHEL does.


Why doesn't that argument apply to the mainline kernel?


Note that Red Hat, SUSE, and Canonical all don't use them. They maintain their own kernel trees with their modifications. These kernel teams have different priorities:

* stable ABI within a timespan (RH, SUSE)

* feature backports based on customer demand (all three)

* out of tree goop (Canonical)

I've also heard from others (though notably, not folks specifically at these companies) that longterm kernels are iffier than regular stable kernels because people come out of the woodwork to push weird stuff for longterm kernels, which destabilizes things more than with regular stable kernels.


Seems like even Arch maintains its own kernel. It's not at all uncommon for distros to do this.


Wrong, or at the least misleading.

What do you mean by 'maintain' here? Because Arch tries to stay on the bleeding edge in a reasonable manner and is rolling release, they stay almost 1:1 with upstream. "Maintains its own kernel" makes it sound like they do heavy patching and actively maintain particular kernel versions, et cetera. You can have a look yourself if you want to: https://git.archlinux.org/linux.git/commit/?h=v5.7.5-arch1&i...


They have their own kernel as can be seen here:

https://i.imgur.com/jP6Pdvq.png

Also if you look at the diffstat you can see that there were a number of changes here:

https://i.imgur.com/J2em78v.png

It may align with the majority of the kernel, but these patches exist in every version of the kernel that is released by Arch.

What else could that be called aside from maintaining your own kernel?

These changes are extremely small in scope, which is the nature of Arch, but they still exist. A more hands-on distro will see many more changes than Arch, naturally. However, as evidenced above, you can see that Arch does have its own kernel; I cited Arch as an example precisely because it strives so hard to stay 1:1 with upstream.

Btw, I use Arch.


"maintaining your own kernel" in this context means that you're bringing bugfixes (including security) independently from the mainline Linux kernel, which would usually involve cherry-picking commits from the mainline.

Arch Linux doesn't do that; it straight-up merges Linux mainline.


The RHEL8 kernel has a few tens of thousands of commits on top of 4.18. That's a bit more than Arch. :)


Because their release schedule doesn't match the LTS release schedule? Distributors may want to ship their OS with a newer kernel than the latest LTS.


What is the difference between a "longterm kernel" and a "regular stable kernel"?


"regular stable kernel" lives only about 3-4 months, just long enough for the next release from Linus to feel "good enough".

"longterm kernel" lives for 2+ years. I pick one each year (usually the last one released in a year) for that.

See the releases page on kernel.org for details on what the longterm kernels are, and for how long they are being maintained and by whom.


Not in my experience. LTS never seems to break any compatibility, even internal kernel structs, function names, etc.


Actually LTS does break ABI, but gregkh is working with Android to make a partially ABI-stable LTS (I assume it would be a fork of the real LTS though?).

I know they are also working with RH and SUSE, but I don't know if RH and SUSE will only share the tools or will also have the same LTS source tree.

When I say "partially ABI-stable", I mean their goal is to list acceptable symbols and automatically verify that vendor modules use those symbols; only those acceptable symbols are ABI-stable.


That "partially-ABI stable" is the same exact thing that Red Hat and SUSE and Debian have been doing for 20+ years now. Nothing major and exciting there, but see the presentations at the Linux Plumbers conferences for details on the tools being used if people are curious (hint this time everyone is working together on the same set of tools...)


RHEL maintains a kABI to support binary device drivers. Given how frequently the Linux kernel changes internal APIs, this would be much harder or even impossible if they changed kernel versions regularly.

Instead, they take features and bugfixes from upstream and merge them back into their kernel version, auditing them for changes that would not be binary compatible.
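
To make the binary-compatibility problem concrete, a sketch with a made-up struct (the mechanism is the general one, not RHEL-specific):

    #include <stddef.h>
    #include <stdio.h>

    /* What version A of a kernel header declares (hypothetical): */
    struct demo_device_v1 {
            int   id;
            void *driver_data;          /* offset 8 on x86-64 */
    };

    /* What version B declares after an unrelated fix inserts a field.
     * In reality both would just be "struct demo_device" in their
     * respective kernel versions; a binary module only sees one. */
    struct demo_device_v2 {
            int           id;
            unsigned long flags;        /* new field */
            void         *driver_data;  /* now at offset 16 */
    };

    int main(void)
    {
            /* A module compiled against v1 bakes offset 8 into its
             * machine code; loaded on a v2 kernel, it would read
             * 'flags' where it expects 'driver_data'. */
            printf("v1 driver_data at %zu, v2 driver_data at %zu\n",
                   offsetof(struct demo_device_v1, driver_data),
                   offsetof(struct demo_device_v2, driver_data));
            return 0;
    }

This is also why the kernel's CONFIG_MODVERSIONS symbol CRCs refuse to load a module whose view of a type differs from the running kernel's; RHEL's kABI promise is, roughly, a commitment not to let whitelisted symbols and the types behind them change like this within a major release.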


Red Hat sometimes also backports new features, not just security fixes. In the first years of the RHEL life cycle, new features and certain upgrades are usually added in the point releases. This is actually nicer than being locked into the same feature kernel for 10 years... for those that can't afford the changes, there is Extended Support for each point release, which then gets only the security fixes.


Ubuntu does the same thing. I'm only pointing it out because if you can't find the reason RH does it, you might be able to find the reason Canonical does it.


Probably drivers.


What about them? I don't think the OP means using the upstream kernel directly, just asking why they don't use the same version as the LTS kernel.


Reasons I can think of include:

Not wanting to be tied to a kernel LTS release cycle you don't control.

The need to backport features and hardware enablement to the LTS kernel significantly offsets some of the benefits of using an LTS kernel.

The length of support for LTS kernels is significantly shorter than Red Hat's, which means they would end up supporting it on their own for a significant timeframe anyway.


I really hate the fact that RHEL updates kernels so slowly. This becomes very painful when your product cannot take advantage of some advanced kernel features just because you have customers running RHEL :(.


It can be frustrating, but it's almost always worth it in a serious production environment.

I have my own anecdote: I have several home servers running CentOS 7 (set up nearly 6 years ago). I update all the time and I've had zero issues. Literally nothing has ever broken. My previous servers ran CentOS 6 and were equally reliable.

My desktop runs Arch Linux and gets every kernel dot release. I've had an unbootable system at least 5 times over the last 5 years, requiring me to boot from a flash drive to downgrade a package or edit a grub config. As you can imagine, it's bitten me at the worst possible time (when I really don't have time to troubleshoot).

I know some people advocate for Arch on the server, but to me that seems crazy. Nothing against Arch. It's great for a desktop, and I knowingly trade off some reliability/stability for the latest stuff. I wouldn't make that trade on a server that hosted important stuff.


Fedora seems like a sensible middle ground.


I second this. I was running Antergos on my desktop for a while, and it is a fantastic desktop OS if you have time to deal with the inevitable boot failures.


I would recommend keeping an LTS kernel around when running Arch. It's not much harder than pacman -S linux-lts.


That probably helps, though many of my issues have been other problems, like xorg, the nvidia driver, and grub (or now systemd-boot).


If it's any consolation, I'm a software engineer working for Red Hat on the userspace part of one of our most bleeding-edge products, and it's a pain for us too :). And it's a much bigger pain for our kernel guys.

It would be way cheaper and easier for us to ship a newer kernel; we do it because it allows RHEL to have significantly fewer regressions than upstream kernels (and I'm talking about what the kernel engineers say, not some marketing bullshit).


Yes (kernel engineer here), we can mix and match what we pick and when. Almost every major subsystem is updated only a few months after the corresponding patches hit Linus's tree.

I would guess that most of RHEL 7 is on par with 4.14 or 4.19 (which is newer than the RHEL 8 kernel!) and RHEL8.3 will have subsystems updated up to 5.7-5.8.

Of course it's not much fun backporting 50 patches for Intel vulnerability mitigations to RHEL7.2 that some mission critical apps still use. But I guess it pays my bills so who am I to complain, I love everything else about my job. ;)


If Red Hat (and the other major distros) didn't do it, someone else would offer it as a service you could pay for. That lack of updating is exactly why they are used.


Try SLES; enterprise support, recent kernels.


Is there any way you can use a different kernel? The kernel is really easily replaceable in other distros. I don't know much about RHEL.


No, you can't do that in RHEL. It would invalidate support.


Seems sad that the modularity of Linux is not allowed due to support issues. That, in my opinion, is the best part of Linux: finding solutions to problems that exist outside the scope of what the vendors designed.


It's a tradeoff. There's nothing to stop you from taking a RHEL system and installing the very latest kernel, and there's a good chance that the resulting system would work fine. However, RH the company only supports their own versions, which seems reasonable. Also, and not unrelated, I believe RHEL kernels hold a stable ABI to the point that you can use vendor-provided (out-of-tree) drivers and not have them break on every release, which is something that normal kernels intentionally don't support. So you can use RHEL like any other Linux system, but you lose some of the advantages that made RHEL useful in the first place.


You're free to do whatever you want with your Linux distribution, but if you're not using the thing that the vendor supports, they're not going to support it. There's not really any way around it.

Unfortunately the kernel has significant and widespread implications on the system, so it's very hard to figure out how to support a system that is running a kernel version you haven't validated. As a RH customer, the way I handle this kind of thing is usually by replicating the problem on a "stock" system and working with RH support on fixing it on that system --- this is often an important step anyway since, for my own purposes, I need to figure out whether or not I introduced the problem by something I did.


A big part of RHEL IS their kernel. They spend a lot of time to optimize/stabilize their releases. Why would they support any other kernel?


Nothing stops people from using whatever distro they like, instead of RHEL. If they are using RHEL, presuming they’re rational actors, it’s because the RHEL model works for their use case.


Facebook runs CentOS with mainline kernels: https://www.youtube.com/watch?v=cA_Nd3crBuA&feature=youtu.be


I update some CentOS servers with the mainline kernel from ElRepo.org and have had no problems for four years, but my stack is minimalist (webservers), which may be why it does just fine.


Suggest Oracle Linux. Their UEK gets refreshed on a regular basis and is way closer to upstream. UEK6 just released for OL7 and OL8, based on the 5.4 kernel.

(disclaimer: I work for the Compute team in Oracle Cloud Infrastructure, and work with the OL team regularly on a number of things. It has been one of the more interesting aspects of the job)


While Red Hat's backporting might not be great, I believe that upstream would do well to reconsider its stance on having a well-defined vulnerability identification and notification system.

It's understandable that almost any piece of kernel code could potentially be abused by a bad actor, and thus it would be tough to identify whether every fix has security implications or not.

Still, there must be a middle ground around common exploitation methods.


And what would that "middle ground" look like?

With the current rate of change that the kernel community develops at, including the patches backported to the stable/longterm kernels, it's impossible to try to evaluate each and every patch for "is this something that could be exploited or not?"

Companies have tried, it was fun watching them, but they quickly gave up and declared it impossible and much safer to just take all stable patch updates instead.

I've also talked to MITRE about just applying for a CVE for every stable kernel patch (20+ a day), and while they appreciated me not doing that, they agreed that the current model of CVEs just does not work at all for the Linux kernel and that what we are doing is fine.

See my Kernel Recipes talk last year for details about all of that if you are curious.


I understand that completely, and I already know your thoughts on that; to some extent I do agree.

Still, I think we have at hand a very characteristic issue: even without knowing the details, simply searching commit messages for "crypto", "key", "buffer", etc. should alert somebody to give it a second and third look.


Hell no. You can't just "guess by words."

The only thing (far!) worse than no impact marking is incomplete impact marking. That's just giving people a way to cop out, and they WILL use it.


How is no marking better than some marking?

If there is a commit that refers to a "memory leak", why shouldn't it be at least superficially checked, identified, and have distros informed? (e.g. 2ca068be09bf8e285036603823696140026dcbe7)

If the crypto fix had been flagged as a vulnerability early on, would it have stayed unpatched for that long?


> How is no marking better than some marking?

With no marking it is clear what it means: commits have not been audited to identify security-relevant ones.

With partial, incomplete marking, unmarked commits can be one of two things: commits that have not been looked at, and commits that have been looked at and are believed to contain no security relevant changes.

The majority of commits will be in the "not looked at" category. And there are enough people around for a significant subset of them to be lazy, ignorant, unskilled, or stupid and take that as "contains no security relevant changes."

P.S.: also, patches are already marked. By being included in the LTS series. Because that means they were important enough to get a backport — though not necessarily due to security impact.


I do agree with the premises; I don't agree with your conclusion.

Yes, only a portion of patches would be marked as such. That portion, major or minor, would simply mean that people won't have to reinvent the particular wheel, as happened in this case. People won't be missing critical _discovered_ changes; the vulnerability would be discussed, recognized in its totality (PoC, documentation, etc.), and proper patches would be offered. There have been cases where LTS backports were old revisions of bad patches.

I think that the way LTS kernels are baked is unnecessarily close to an artistic approach to doing things.


> i believe that upstream would do good if they would change their mind about having or not a well defined vulnerability identification and notification system.

No. Unless an issue is tested against HEAD, reporting it is just noise.

Reporting a fixed issue is a faux pas one has to pay for: listening to snarky comments from friends and co-workers, or in this case, because they should know better, a few kegs of beer for the next FOSS event...


Not every security fix is known to be a security fix at the time. If such a system existed, people would be overconfident in it and this would happen even more.



