What was notable in this case was that Google disclosed this issue to Intel, and Intel then took responsibility for public disclosure. Intel proceeded to mail patches to public trees without flagging them as having any security relevance, posted a public disclosure despite the patches only being in pre-release kernels, and claimed that affected users should upgrade to 5.9 (which had just been released), a release that did not actually contain the fixes.
Coordinating disclosure around security issues is hard, especially with a project like Linux where you have an extremely intricate mixture of long-term support kernels, vendor trees and distributions to deal with. Companies that maintain security-critical components really need to ensure that everyone knows how to approach this correctly; otherwise we end up with situations like this, where the disclosure process actually increased the associated risks.
(I was part of Google security during the timeline included in this post, but wasn't involved with or aware of this vulnerability)
To me it feels like we owe Google a lot for what they are doing to find security vulnerabilities. Did companies do stuff like this before Google's Project Zero?
We benefit a lot from Google doing this, for sure. Do we owe them a lot? For that to be true, Google's motive would have to be altruistic, and the value it delivers would have to exceed the value it derives from the open-source community / security community in general.
I understand the point about value derived vs. provided, but I disagree that it has to be altruistic. Someone donating large amounts of money to charity just to write it off on their taxes isn't doing so altruistically, but is still doing a lot of good. I'd say we still owe them gratitude - or at the very least, the people they're helping do.
> Someone donating large amounts of money to charity just to write it off on their taxes isn’t doing so altruistically
I mean, it's still altruistic because they're going to 'lose wealth' by donating. You're never going to earn back in tax write-offs as much as you spent by donating.
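The arithmetic behind that claim is easy to check. A sketch, using made-up figures (the donation amount and the 37% marginal rate below are purely illustrative assumptions, not anyone's actual tax situation):

```python
# Hypothetical illustration: a tax deduction only offsets part of a
# donation's cost, so the donor still ends up with less wealth overall.
donation = 10_000                  # assumed donation amount
marginal_tax_rate = 0.37           # assumed top-bracket rate, for illustration

tax_saved = donation * marginal_tax_rate  # value of the "write-off"
net_cost = donation - tax_saved           # wealth the donor actually gives up

print(f"tax saved: ${tax_saved:,.0f}")    # $3,700
print(f"net cost:  ${net_cost:,.0f}")     # $6,300
```

As long as the marginal rate is below 100%, the write-off never recovers the full donation, which is the point being made above.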
Sort of... most companies that fund this type of research do it for product development (IDS/IPS-related stuff), or as vulnerability development (for sale as part of exploit packs, or for use in engagements). This is a gross overgeneralization, but like everything else related to the disclosure and research field, there is a lot of history and drama involved.
from the guys who brought you "the spectre patch in the kernel that's disabled by default" and "ignore your doubts, hyperthreading is still safe" comes "the incredible patch built solely around shareholder confidence and breakroom communication"
> "the spectre patch in the kernel that's disabled by default"
I'm not sure what you're referring to here.
I was one of the people who worked on the Linux PTI implementation that mitigated Meltdown. My memory is not perfect, but I honestly don't know what you're referring to.
> The whole IBRS_ALL feature to me very clearly says "Intel is not serious about this, we'll have an ugly hack that will be so expensive that we don't want to enable it by default, because that would look bad in benchmarks".
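For anyone wanting to see which of these mitigations their own kernel actually has enabled: the Linux kernel reports per-vulnerability mitigation status under `/sys/devices/system/cpu/vulnerabilities/`. A minimal sketch that reads that directory (Linux-only; on other systems it simply finds nothing):

```python
from pathlib import Path

# The kernel exposes one file per known CPU vulnerability (spectre_v1,
# spectre_v2, meltdown, ...) whose contents describe the active mitigation.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status() -> dict[str, str]:
    """Return {vulnerability_name: mitigation_string}, empty if unsupported."""
    if not VULN_DIR.is_dir():
        return {}
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

for name, status in mitigation_status().items():
    print(f"{name}: {status}")
```

On an affected CPU, the `spectre_v2` entry is where you would see whether IBRS/retpoline-style mitigations are in effect.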
It seems odd that they would have notified Intel rather than the kernel security team, given how poorly Intel has handled disclosures in the past... It's good that they do note that "The Linux Kernel Security team should have been notified in order to facilitate coordination, and any future vulnerabilities of this type will also be reported to them".
Thank the lord for distros like Fedora. They deal with security and other issues, and are big enough that if Intel tried to sneak something past, the engineers working on Red Hat would almost assuredly have noticed.
It's like Intel is releasing the same CPUs with new boxes and naming hoping that people won't find out :-) That company has become such a failure. What happened 5 years ago?